J Wolfgang Goerlich's thoughts on Information Security
Perimeter-less Security and Clouds on the Horizon

By wolfgang. 25 April 2008 04:14

Cloud Computing: Eyes on the Skies gives a good definition of Cloud computing.  “Cloud computing is similar to what the tech industry has been calling "on-demand" or "utility" computing, terms used to describe the ability to tap into computing power on the Web with the same ease as plugging into an electric outlet in your home. But cloud computing is also different from the older concepts in a number of ways. One is scale. Google, Yahoo! (YHOO), Microsoft (MSFT), and Amazon.com (AMZN) have vast data centers full of tens of thousands of server computers, offering computing power of a magnitude never before available. Cloud computing is also more flexible. Clouds can be used not only to perform specific computing tasks, but also to handle wide swaths of the technologies companies need to run their operations. Then there's efficiency: The servers are hooked to each other so they operate like a single large machine, so computing tasks large and small can be performed more quickly and cheaply than ever before. A key aspect of the new cloud data centers is the concept of "multitenancy." Computing tasks being done for different individuals or companies are all handled on the same set of computers. As a result, more of the available computing power is being used at any given time.”

Clouds are on the horizon. I know very few data centers that host everything internally. Most, including my own, deliver a mixture of desktop applications, client-server applications, and hosted (e.g., cloud) web apps. The shift has an immediate impact on security planning. Information security architectures began with terminal-server applications and focused on strong perimeters. With apps moving to the desktops, the perimeter became a little wider and a little more porous. But we could still control the information by restricting what data was on the desktops and using technologies like end-point security. In fact, one might argue that many of our controls today are based around restricting information to the data center and keeping it off the desktops. The next major shift, which we are already starting to see, is moving the information from data centers to third-party hosting providers. This is only going to accelerate as young people, weaned on MySpace and Gmail, join the workforce. Another accelerant we may see in the next few years is an economic downturn. Both sociological and economic changes are moving data from controlled perimeters to uncontrolled open spaces. The clouds on the horizon are coming nearer.

The open question is this: how do we build controls in an age of perimeter-less security?

 

Tags:

Architecture | Security Information Management

My standard IOmeter test

By wolfgang. 23 April 2008 06:33

A good disk storage subsystem test case takes into account several factors. What are the differences between different IO block sizes (4K, 8K, 16K, 32K, and 64K)? What are the differences between reads and writes? Between streaming and random IO? A good test case covers the variety of IO types to get a sense of the storage subsystem performance.

I uploaded my latest attempt at such a test case (/files/Full-IO-Test.txt). Download and save the test as a .icf file, and open it using IOmeter 2006.07.27 on a Windows or Linux OS. There are twenty-five tests in the file, named XK n x/x WR (write/read) x/x SR (streaming/random). For example, the ninth test is 4K 9 75/25 WR 25/75 SR and uses 4K IO, 75% writes, 25% reads, 25% streaming, and 75% random. I recommend running each test for one hour. The entire test case will then take a little over a day to complete.
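The test matrix can be enumerated programmatically. Here is a minimal Python sketch that generates names in the same XK n x/x WR x/x SR format; the specific mix percentages and their ordering are illustrative assumptions, not necessarily the exact contents of the uploaded .icf file:

```python
# Enumerate a 5x5 matrix of IOmeter access specifications:
# five block sizes crossed with five write/read + streaming/random mixes.
block_sizes = ["4K", "8K", "16K", "32K", "64K"]

# (write%, read%, streaming%, random%) -- assumed representative mixes
mixes = [
    (100, 0, 100, 0),
    (75, 25, 25, 75),
    (50, 50, 50, 50),
    (25, 75, 75, 25),
    (0, 100, 0, 100),
]

tests = []
n = 1
for size in block_sizes:
    for wr, rd, st, rn in mixes:
        tests.append(f"{size} {n} {wr}/{rd} WR {st}/{rn} SR")
        n += 1

print(len(tests))   # 25
print(tests[0])     # 4K 1 100/0 WR 100/0 SR
```

At one hour per test, the full 25-test sweep matches the "little over a day" runtime noted above.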

Please let me know what you find with this test file. How can I improve the accuracy of the results?

 

Tags:

Systems Engineering

Virtualization for Disaster Recovery: Strategies

By wolfgang. 6 April 2008 09:07

Using virtualization as a disaster recovery strategy can be done in one of two scenarios:

The first scenario is VM to VM. Put a hypervisor at the production site and another at the recovery site. Run the production server in a VM. Replicate the VM's drives to the recovery site. During a disaster, boot the VM on the recovery hypervisor.

The second scenario is bare metal to VM. Put a physical server running on bare metal at the production site. Stage the physical server with the necessary VM drivers (in Hyper-V, these are called the Integration Components). Put a hypervisor at the recovery site. Replicate the disks. During a disaster, boot the server up as a VM on the recovery hypervisor. The second scenario requires block-level replication and the ability for the hypervisor to read native disks. If either requirement cannot be met, an alternative is to restore the production server into a VM using backup software that supports P2V disaster recovery, such as Acronis, ARCserve, or Backup Exec. The downside is that this option takes significantly longer.
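When block-level SAN replication is not available, file-level copying is one way to ship the VM drives between sites. A minimal Python sketch, assuming an rsync-style transfer with hypothetical host and path names (a real deployment would more likely rely on SAN or dedicated replication software):

```python
def build_replication_cmd(vhd_path, recovery_host, recovery_dir):
    """Build an rsync command that ships a VM disk image to the
    recovery site. --inplace updates changed regions of the existing
    remote copy instead of rewriting the whole file; --partial keeps
    interrupted transfers so they can resume."""
    return [
        "rsync", "--archive", "--inplace", "--partial",
        vhd_path,
        f"{recovery_host}:{recovery_dir}",
    ]

# Hypothetical production disk and recovery hypervisor
cmd = build_replication_cmd("/vmstore/prod-server.vhd",
                            "recovery-hv", "/vmstore/")
print(" ".join(cmd))
```

Running the resulting command on a schedule (cron, Task Scheduler) gives a crude replication interval, which in turn bounds how stale the recovery copy can be.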

Tags:

Business Continuity | Virtualization

Virtualization for Disaster Recovery: Metrics

By wolfgang. 5 April 2008 22:41

Some quick thoughts on using server virtualization for disaster recovery. The key metrics in using VMs for DR are RPO and RTO, both defined during the BIA process. One question that I wrestled with was how to get a near-time RPO (losing only the few minutes of data immediately before the disaster) and a rapid RTO (back online within one hour after the disaster).

Traditional P2V techniques rely on a live system or a nightly backup, so the RPO is up to 24 hours. Traditional P2V also relies upon writing the data back out into virtual disks, so the RTO for our average server was up to 7 hours. We addressed these challenges by keeping the storage on a backend SAN and presenting the disks to the VM in the event of a disaster. The RPO is then near time and the RTO is an hour or less.
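The arithmetic behind figures like these can be sketched in a few lines of Python. The 350 GB server size and 50 GB/hour restore rate below are illustrative assumptions, not measurements from our environment:

```python
def worst_case_rpo_hours(replication_interval_hours):
    """Worst-case data loss (RPO): a disaster strikes just before the
    next backup/replication cycle, so everything written since the
    last cycle is lost."""
    return replication_interval_hours

def estimate_rto_hours(data_gb, restore_rate_gb_per_hour):
    """Rough recovery time (RTO) when the data must be written back
    out into virtual disks, as in a traditional P2V restore."""
    return data_gb / restore_rate_gb_per_hour

# Nightly backup: up to 24 hours of data can be lost.
print(worst_case_rpo_hours(24))     # 24

# Assumed 350 GB server restored at 50 GB/hour.
print(estimate_rto_hours(350, 50))  # 7.0
```

Presenting the SAN disks directly to the VM eliminates the restore step entirely, which is why the RTO drops to roughly the time needed to boot the VM.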

The DR strategy requires native NTFS disk access and SAN support. Both VMware ESX and Hyper-V support this type of DR. Linux-based hypervisors such as Xen do not.

Tags:

Business Continuity | Virtualization

Effective Presentation Techniques

By wolfgang. 3 April 2008 00:58

As a publicly traded company, we have many rules and regulations to follow regarding employee trading. Part of this is a yearly training session on trading compliance and the systems we provide. I sat through this a few weeks back. It was the normal session about not accepting cash or large gifts, not trading stocks that the company is currently researching or holding, et cetera.

At the end of the presentation, the head of the compliance team stood up. She thanked her staff for the presentation and thanked all of us for coming. Then she pulled out a newspaper. She described the company in the news and how similar it was to ours in size, focus, and culture. She then read from the paper. This company had a compliance lapse. The SEC fined the company millions of dollars, and fined the person who executed the trade over a hundred thousand. The room fell dead silent.

I thought this was a particularly effective technique to reinforce the idea that the rules we follow really do matter.

Tags:

General
