
Archive for the ‘Amazon AWS’ Category

Cloud Projections for a Near Future: Factual Fiction

April 12, 2011

Hi all,
Cloud is now mature enough to tell the big bosses in big companies to take some big, radical decisions, because it's the big time now.
Introduction

So what should we expect from the rain-bucket, i.e. the cloud? I am going to write about what implications the current trends will bring, what sort of new models can emerge, what opportunities can come, and what threats might stream along with them.
Mainframe to client/server had a generational impact and a generational shift in mentality. It also shaped the world with its opportunities and challenges. Then came convergence, and the life of everything became inter-meshed. Then came the marriage of the telecom sector with the IT sector. Then came the tilting of social media from vertical to an awesome horizontal. Now it's time to go back to the old days of electricity.

Cloud trends across multiple service architectures:

I am trying not to write this in a way that turns the post into yet another mundane appraisal of cloud computing. The real issue here is to inspect the trends, see what tweaks we can introduce, and consider what implications those would bring.
The IT companies have no doubt set the trend for others to follow suit, and have shown a tremendous business-case opportunity to those who were sceptical in the first place. The telecom sector, with its IMS (IP Multimedia Subsystem), SIP, SS7 stacks, INs (Intelligent Networks), WiMAX (Worldwide Interoperability for Microwave Access), femtocells, and LTE (Long Term Evolution), has mesmerised operators into using their pipes to serve customers at other service levels, e.g. finance, medical, engineering, etc. Now the IT and telecommunications sectors have the opportunity to be in the same room and play the metal for the audience. Let's try our IF mentality, i.e. IF this happens then this can happen, and IF not then that.
Cloud service providers will be able to build cloud data centers in each country in the near future, let's say 2-3 years. Currently we have a few data centers in multiple countries, BUT China, India, Canada, Nepal, Mexico, Brazil, Pakistan, and Russia still don't have that much to offer in terms of cloud provisioning for the public. If that changes, it will push certain behaviour in non-IT sectors. Telecom companies, which exist in every country, often several per country, will be able to make cloud applications/services that are specialised with respect to locality, aggregation, security, and performance characteristics. Added to this flavour will be the horizontal aspect of social media. Added to that again, the ease of access and the broadband speeds that consumers will have in two to three years will give full throttle to cloud computing, where SOA will get its out-of-the-box access, creation, maintenance, and serviceability from the enterprise level down to the bare-metal consumer level.

Let's dig up the protos:

Spot Cloud, storms in clouds, the intercloud, the API explosion, and multiple implementations have already started a shift in academia towards the standardisation stage for clouds. IEEE has opened its hands to receive the standardisation output, IETF is working on getting it standardised, and Orange, Ericsson Research Labs, and AT&T are moving in the direction of standardising their own or public cloud implementations through their initiatives, e.g. SAIL (Scalable and Adaptive Internet Solutions) by Ericsson.
Now comes the BIG V, i.e. virtualization. Distributed virtualization is gaining strength, and the fluid nature and easy management of VMs will push another layer of virtualization to form. This layer will have a tendency to form SOA models from within itself. So the front ends to this layer will be used in enterprises to have their own machines floating in their own distributed virtualization pool, and a much more generic output will come with architecture independence. I will touch on this in a later post.

Security concerns never die

Security concerns, as they stand in the market right now, are pretty natural but lack judicious treatment. If enterprises do not solve this psychological milestone, or cloudKnot, then we will see more PRIVATE than PUBLIC in the next few years. If they do solve it, then TESTING in the cloud will be augmented with application delivery, with telecom pipes coming IN and OUT of the public clouds too. That is not currently possible, but this radical step might be on the way. But be cautious: the main security threat will COME at that time. I guess the concerned guys know what this prick (myself) is writing here.

Have a nice day.
Enjoy

Is a Cloud Consultancy Apocalypse coming in the next few months?

March 10, 2011

Shlomo Swidler and Reuven Cohen wrote very good starter posts on how to select contractors for cloud consultancy back in 2009, when the cloud roar was in its infancy. Back then the topic of interest was Amazon EC2. In this post I will try to generalize that to multiple clouds, and will look at it in terms of what's happening in 2011, two years after they wrote their articles.
They gave guidelines on how to select a cloud consultant. I will go a bit further and highlight what's happening in the arena. So let's kick off.

Currently there is a big move among all sorts of enterprises to shift to the cloud. Since many of these enterprises don't have the level of cloud knowledge of a dedicated team, or of someone who has been working on it for a long time, as the posts from the guys (Shlomo and Reuven) recommend a consultant should have, the market for cloud consultants is far more attractive now than before. And we see many good market players and startups jumping into cloud consultancy. They use all sorts of tiers, and many of them can even manage all the services across almost all the cloud providers.

So what dimensions are being picked up by these cloud consultants, and how are they especially valid for the current shift in enterprise tastes? Obviously the homework done by these cloud consultants is based on some kind of need from enterprises, but what is the main thing that drives enterprises to focus more on consultancy than on getting involved in growing their own experts in cloud management and provisioning? To become experts they would have to go through some kind of learning curve, and even then they would have to trust their own employees with it. Infant trust is the kind of thing most bigger enterprises don't normally believe in. So the coming months could be something of a biggy for cloud consultants. But one more thing is still lagging, and that is how important and URGENT the need is for enterprises to shift to the cloud, specifically in 2011.

To answer this question we would have to dig into user behavior, and not the development behavior that's going on in enterprises. I may be wrong here, but I would love to think it over. That analysis will also come, hopefully, if I have ink left on my keyboard for tomorrow.

Behavior, Experience, and Band Wagon:
Migration consultants typically go through cloud adoption assessments first. These assessments weigh heavily on the correlation between the internal data centers and the cloud environments. I think the weight should be shared equally with what user behavior, experience, and band wagon demand. They should include some kind of service-level assessments too, and many of them do. But the pronounced effect that tells an enterprise it is time to rely on consultants isn't that powerful yet, it seems to me. One obstacle is the enterprise's concern about hiding the secrets of the service it provides in its local data centers. So a true augmentation of Shlomo's post can help these enterprises a lot. But consultants have to push this kind of knowledge to the enterprises. It's their responsibility.

When a company thinks of adopting the cloud, they have to think in terms of some kind of shift in current strategies for

  • development
  • configuration and environment
  • operations and monitoring
  • provisioning and metering
  • data auditing
  • costing

So consultants fall into the above areas. Back in 2009 they were individually targeting each area, but now they have gone as far as providing consultancy across all of these areas, or a mix of them. Plus, since multiple clouds exist, they can provide it across multiple cloud environments. Even phased migration doesn't mean that an enterprise gets rid of the contractor after the last phase of migration. Now the game will turn towards TRULY MANAGED SERVICES, at least for the time it takes an enterprise to develop its own skills in the cloud.

Knowledge of Business cases
While working with enterprises, cloud consultants will get to know, in depth, the business cases being thought up inside those enterprises. So it will be a kind of strange mix, where a cloud consultant knows you and your customers better than your own enterprise does. I may be wrong, but my brain cells are tweeting me in this area. So this will help accelerate the consistency of the apocalypse.

Cloud consultancy dimensioning
Cloud consultants don't need to think along only one axis. They have IaaS, PaaS, and SaaS in the armoury. And among these three, they have more sub-THINGS than you can imagine: virtualization, I/O, scalability, performance, monitoring VMs, monitoring apps, security compliance, and much, much more. That's a hell of a lot of stuff for an enterprise to learn in a few courses. They need experience. So we get the cloud consultant apocalypse on our hands.

Survival of the fittest
I hate the medical version of this phrase, but love the innovative and entrepreneurial one. Cloud consultants have to be the best to be picked up by enterprises. And they have to provide the same kind of commitment to all sorts of enterprises, whether small or large, because they use the same kind of tools for any kind of deployment and monitoring. This consistency will help competition, which will be the roar of cloud consultancy.

Startups in non-consultancy can benefit from being consultants
Startups which provide their services in the cloud and manage them CAN build another tier of their company where they provide cloud consultancy to smaller companies or NEW startups. They thereby create a new part of the cloud consultancy spectrum, where they will be the enterprise at some future point in time and will have the necessary skills to be on the new frontiers of cloud services or research. I hope some of you get my point.

That's it for now. I would love to see comments.

Some Cloud System-Testing Tools

February 25, 2011

Hi all,

It's been a while since my last post. I am just trying to juggle my memcapsules to create something of use. Here is a list of cloud testing and performance measurement tools. Application performance tools, and the variation in their techniques, are the hidden elements I will try to draw out of them. Have a nice read.

1- Phoronix Test Suite:
It's an automated open-source (GPL-licensed) testing framework with php-cli dependencies. It gives graphical output of the tests underway. It can test many suites of applications running on Amazon EC2. Those who don't want to use the CLI can employ a GTK2 GUI over PHP. The basic idea behind it is AUTOMATED TESTING.
It comes with 130 test profiles and 60 test suites, plus you can create and add your own test profiles and suites, thanks to its extensible architecture, using just XML. It can monitor power consumption, disk storage, system memory, CPU utilization, disk read/write speeds, graphics performance, and motherboard components.
Tests can be scheduled using Phoromatic.com.
You can find regression patterns in application software; the Phoronix folks have already found regressions in the Linux kernel using this test suite.
USE CASE: Deploy Phoronix TS on your local machine and then in Cloud. Run both of them and compare the results.
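To make that use case a bit more concrete, here is a minimal sketch of what the runs could look like from the shell. The test profile name (pts/compress-gzip) is just an illustrative pick, and the exact subcommands can differ a little between Phoronix Test Suite versions, so treat this as a rough outline rather than the canonical recipe.

# Run the same profile on the local machine and again on the EC2 instance
phoronix-test-suite benchmark pts/compress-gzip

# Results are saved under ~/.phoronix-test-suite/test-results/, so copy the
# result directory from the cloud instance back to the local box and compare
# the two runs (Phoromatic can also schedule and collect runs for you).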

2- SOASTA:
Simply awesome. It's used by well-renowned companies like Netflix, Intuit, Chegg, Microsoft, P&G, Cisco, AmericanGirl, etc. It's commercial and on-demand, and a demo is available.
It provides a web-based, distributed, multi-object, multi-threading testbench. It can create multiple objects like browsers, UI elements, AJAX puts/gets, and HTTP(S) requests on multiple devices, such as mobile and desktop. On top of that, you can get into a live test and see what the current bottlenecks are.
Check out their Global Test Cloud offering, which will give you testmarks on multiple clouds. You can simulate web traffic on any cloud platform with any amount of load that you want to target. AND remember, it's TURNKEY. Isn't that awesome? AND it's pay-as-you-service. Isn't that awesome too?

3- CloudTesting.com:
It provides automated website testing services. Pricing plans start from 100 pounds per month. It operates on a SaaS model.

4- PushToTest:
It runs on your test equipment, in the cloud, or both, on the distributed TestMaker test environment. It's also on-demand. It can run in multiple cloud environments, and global user traffic is one dimension of it. It also supports CollabNet Cubit. You can organize tests using simple XML configuration files. Customers include Intuit and Cisco.

5- SOAPSonar:
It's an offering from CrossCheck Networks that provides testing for web services and ESBs. It can be deployed as software, on VMware, or as a cloud image. It provides SOAP, XML, and REST testing over HTTP(S), MQ, and JMS protocols. It provides functional, performance, compliance, and security testing in the cloud. In performance-testing mode, you can validate SLA rates in terms of throughput and capacity statistics from the back-end service in the cloud.
It's a commercial offering, and several feature-segregated editions are available along with a trial version. You can compare editions here.

6: Bonnie++ filesystem benchmarking:
In the words of Linus Torvalds, creator of the Linux kernel, it is a “reasonable disk performance benchmark”. The SUN guys used it all the time, and it's been around since 1996. Both 32-bit and 64-bit versions are available. It measures hard-drive and filesystem performance. If you wish to test different zones of a hard drive, it is good to use ZCAV along with bonnie++.
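As a rough sketch of a typical run (the mount point, size, and user below are placeholders to adapt to your own machine; the rule of thumb is to make the test size bigger than RAM so the page cache doesn't skew the numbers):

# Benchmark the filesystem mounted at /mnt with a 4 GB test size,
# running as an unprivileged user
sudo bonnie++ -d /mnt -s 4096 -u nobody

# Repeat the same invocation on your local server and on a cloud instance
# to get comparable throughput and seek figures for the two storage back-ends.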

7: Sysbench MySQL benchmarking:
Back in 2006, SUN published Solaris vs. Red Hat comparisons based on the output of this tool, and it has matured since then. It allows you to test file I/O performance, scheduler performance, memory allocation and transfer speed, POSIX thread implementation performance, and database server performance. The first four are good for platform evaluation: e.g. test these four parameters on your local data center and then on a public cloud to evaluate both platforms against your application. It uses Lua as its scripting language, so you can write your own scripts for it. It's a very good tool, and in fact is built for OLTP (OnLine Transaction Processing).
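Here is a hedged sketch of the two modes I find most useful, using the classic 0.4-style syntax; the file size, thread count, and MySQL credentials are placeholders, so adjust them for your own environment.

# File I/O: lay down test files, run a random read/write workload, clean up
sysbench --test=fileio --file-total-size=2G prepare
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw run
sysbench --test=fileio --file-total-size=2G cleanup

# OLTP against a MySQL server (database, user, and password are placeholders)
sysbench --test=oltp --mysql-db=test --mysql-user=root --mysql-password=secret prepare
sysbench --test=oltp --mysql-db=test --mysql-user=root --mysql-password=secret --num-threads=8 run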

8: ProbeVue:
Those well-versed in ProbeVue may ask why I haven't mentioned it in the first place. Well, I don't want to answer that question, since I am waiting for it to become the ultimate tool for me and to be available in other coding styles rather than just C. Apart from that, it's the best of the best from IBM. It is basically a lightweight dynamic tracing utility, and many people (like William Favorite from The Ergonomic Group) think it is the future of system introspection.
When using ProbeVue, a developer can select what he/she wants to extract from an event, and is still able to define his/her own events using Vue as the language. Here is the comparison from William's slides.

I will be adding more to this post. Please add yours in the comments section.
Cheers

Categories: Amazon AWS, Cloud, Performance

6 Phases of successful migration of Enterprise Apps to the Cloud

February 6, 2011

NOTE: This post is a sort of abstract version, with my comments, of a great document available here: http://media.amazonwebservices.com/CloudMigration-main.pdf

Phase 1: Cloud Assessment Phase:

In this phase ask yourself these questions:

a- What is the difference in cost, security, and compliance between your data center realm and the cloud realm?

b- Do you have a business case in hand? Who in your organization knows about it, and how much? Are the implementers aware of what part they have to play?

c- When you talk about the cloud, you have to keep COMPUTE, STORAGE, AND TRANSPORT in mind. Have you got any plans for that? How will you handle compute, storage, and transport? A pre-assessment study should yield a start-off plan. Otherwise, application metering in the compute, storage, and transport domains becomes a necessary factor in the cloud for enterprises, especially telecom enterprises.

d- Your security advisory and auditing advisory should have an assessment plan for the cloud beforehand. Have you involved them, or are you just going with a good gut feeling?

e- Have you characterized the sensitivity of the Data that will be ported or kept?

f- Have you classified your enterprise application based on its dependencies and risk?

Dependencies: 1- Applications with top-secret, secret, or public data sets

2- Applications with low, medium, or high compliance requirements

3- Applications that are internal-only, partner-only, or customer-facing

4- Applications with low, medium, or high coupling

5- Applications with strict or relaxed licensing

Phase 2: Proof of Concept Phase:

In this phase consider the following points:

a- The goal here is to learn about the cloud provider while you are in direct contact with it. Deploy a simple app and then see the output (a rough command-line sketch follows this list).

b- Approach your assumptions with real measured data from an example installation/deployment.

c- Start with a small public dataset used by an application that has similar dependencies to your enterprise application.

d- The purpose of the proof of concept is to get your hands wet and build a case for a critical next-step evaluation based on Phase 1.
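As a minimal sketch of what "deploy a simple app and see the output" can look like with Amazon's EC2 API command-line tools: the AMI ID, key pair, instance type, and instance ID below are placeholders, and the tools assume your AWS credentials are already set up.

# Launch one small instance from an AMI of your choice
ec2-run-instances ami-12345678 -n 1 -k my-keypair -t m1.small

# Watch it come up and note its public DNS name
ec2-describe-instances

# ...ssh in, deploy the sample app, take your measurements...

# Tear it down when the proof of concept is done
ec2-terminate-instances i-abcdef01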

Phase 3: Data Migration Phase

In this phase consider following points:

a- Involve the Enterprise Architect in the equation.

b- Evaluate Cloud storage options against your local-storage options

c- NoSQL or relational database?

d- Estimate the effort required to migrate data to the cloud (see the sketch after this list).

e- Get some metering software, like OpenCore 6.1, that will measure the latency and response time of read/write operations on the datasets.

f- If you don't have data, or you only deal with real-time non-persistent data, have a coke and enjoy the next phase.
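To make point (d) a little more concrete, here is a hedged sketch of estimating the transfer effort with s3cmd, assuming S3 as the target store; the bucket name and local path are placeholders.

# Create a target bucket and time a representative upload
s3cmd mb s3://my-migration-test-bucket
time s3cmd put --recursive /data/sample-dataset/ s3://my-migration-test-bucket/sample/

# Extrapolate from the measured time and your full dataset size to estimate the
# migration window; run it once from the data center and once from a cloud
# instance to see both directions of the pipe.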

Phase 4: Application Migration Phase

a- Learn about the forklift migration strategy and the hybrid migration strategy. What you choose is important.

b- Is your application stateless or stateful?

c- In the forklift approach you port the entire application at once, with minimal code changes; it deals with stateless apps.

d- In the hybrid approach you can move parts of the application one at a time.

Phase 5: Leverage the Cloud Phase

This is the phase where your application lives in the cloud as you planned. In this phase consider the following:

a- Now you think about auto-scaling, edge caching your static content, auto-recovery, and elasticity (a rough auto-scaling sketch follows this list).

b- How about business continuity, with the new knowledge at hand about cloud-aware applications?

c- Network-level parameter estimation should also be considered. Connectivity constraints should be put on the table.
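As a rough illustration of point (a), the Auto Scaling command-line tools let you express elasticity in a couple of commands. The names, AMI ID, instance type, zone, and group sizes below are placeholders, and the exact flags may vary with the tool version, so take this as a sketch only.

# Describe how new instances should be launched
as-create-launch-config my-launch-config --image-id ami-12345678 --instance-type m1.small

# Keep between 2 and 10 such instances running in one availability zone
as-create-auto-scaling-group my-asg --launch-configuration my-launch-config \
    --availability-zones us-east-1a --min-size 2 --max-size 10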

Phase 6: Optimization Phase

In this phase consider the following:

a- How will you optimize the application in terms of cost savings?

b- Pay-as-you-go means that if your application is highly optimized, you will have to pay less too.

c- Get your highly qualified software architects and solutions managers to think about new ways to optimize, using code optimization, dataset optimization, etc.

d- You can get a lot of help with optimization if you run metering and code-probing software on your application in the cloud.

e- Improve caching

Conclusion

If you do all of the above, you have successfully migrated the enterprise application to the cloud, BUT you still need to rethink the factors according to your business case or organizational plan.

Ask Yourself! Basic Application-Migration Consideration Questions-Part 1

February 6, 2011

I have compiled a few basic questions that need to be answered beforehand as the first step in a migration strategy. This part deals with the IT enterprise without the access-network implications. Part 2 will focus on the telecom enterprise, and the perspective will be from the access point of view.

So these are the basic questions that need to be considered by a person who is going to migrate an application without its network ramifications.

1- Which technologies should we use in the Cloud to PERSIST DATA?
2- How will data be shared between entities inside the Cloud and entities inside the local data center?
3- Application metering in the Cloud should be compared with application metering in the local data center. What is the difference between the two metering outputs?
4- Should we adopt a PHASED MIGRATION STRATEGY or a SINGLE-THROW MIGRATION STRATEGY?
5- Should we consider NoSQL or SQL? Should we flatten the databases? Is there any need to convert a relational database to a NoSQL-type database?
6- Cross-zone is definitely not an issue, BUT is cross-region an issue for operators, especially from a telecom perspective?
7- If you want to migrate the DB, how would you sync an Oracle (or similar) DB with SimpleDB, S3, or any NoSQL offering?
8- Do you know that MIGRATING DATA and MIGRATING APPLICATION SOFTWARE are two different things? IF YES, then how will you migrate the DATA? (Compare the questions above!)

If we can answer these questions, we might be able to get to a successful migration.

Recovering Deleted Files on a Linux System

February 4, 2011

Sometimes you lose files on a system, lose them as in DELETE them. Many people think there is no way to get the files back, or that they would have to use some expensive software to do so. But in Linux this can be achieved very easily. I will list two ways here, and I will ask two questions of the Linux community too. Let's start:

First Way:
1- At the root of the filesystem you have the /proc directory, which contains the process IDs of your running processes. Each file you create resides in an inode, and the filename is a reference to that inode. When you delete a file you only delete the reference, not the actual inode, even though it looks to you as if the file is gone from your system altogether.
2- As long as some process still has the deleted file open, that process ID and file descriptor can be used to recover the file. So how do you find the process ID of the file you just deleted? Here is the command for it:

lsof | grep "your_deleted_file_name_with_location"

This will list the following as output:

less 14675 zombie 4r REG 8,1 21 5127399 /home/zombie/test_file (deleted)

The second column is the process ID, i.e. 14675 in my case. The fourth column lists the file descriptor, i.e. 4 in my case (the "r" means it is open for reading).

3- Now that you know the process ID and the file descriptor, let's copy the file from the inode to your preferred location by running the following command:

cp /proc/14675/fd/4 recovered_file

So you just created a new file called recovered_file which contains the contents of the file that you deleted.

Now I have a question for the experts: is it possible to recover the files without zombie-ing them?

Second Way (Easy):
1- You can use the SCALPEL utility to recover your files. It can scan up to 16 EB (exabytes) of disk in one go.
2- On Ubuntu 10.04 I will download and install it using:

sudo apt-get install scalpel

3- Now open its configuration file located in /etc/scalpel/scalpel.conf
4- Uncomment, i.e. remove the # character from the start of, the lines for the extensions that you want scalpel to search for among deleted files. Or simply read the whole (small) configuration file to see what I mean.
5- Now create a directory somewhere and name it RECOVERED. This directory will hold all the recovered files, i.e. scalpel will save all the files that were deleted into this directory.
6- Now use the following command to reclaim/recover all the deleted files of the extensions that you wanted:

sudo scalpel /dev/sda -o RECOVERED

7- After the scanning process is over, open the RECOVERED directory and check to see your recovered files.

Some Questions for Experts:
1- If I delete a file in Windows and try to recover it in Linux, would I be able to do that?
2- How will scalpel and file recovery work in a virtualized environment, e.g. the Amazon EC2 cloud?

Have a nice time.
Cheers

Categories: Amazon AWS, Linux, Linux tools

Digging the surface of a Cloud

February 1, 2011

Cheers to all those who like the rains, but are also interested in how they are made.

Currently I am working on mounting an application on Amazon EC2. To start with, knowing Amazon from one place is not enough. Amazon provides a lot of EC2 instance types, BUT they are not enough either. Amazon does mention that it is going to support new instance types for varying application needs, but how long it will take for them to reach the market is not well known. On the other hand, Amazon gives you the ability to deploy your own HANDMADE Amazon Machine Images (AMIs). So it's good news for those who can make one, and a challenge for those who can't.

So to start the digging you need the tools to dig; imagination alone won't do much here. So what are the tools to dig into the Amazon cloud? Amazon provides APIs and a good set of command-line tools for reference. It also provides a web interface to manage your instances. And if you want to know the performance parameters of your instances, there is another service from Amazon called CloudWatch. But does Amazon provide anything like VMware vCloud Director, on top of which Zenoss has implemented a cool cloud monitoring tool? Zenoss mentions this in its post:

Zenoss Provides Unified Visibility and Real-Time Awareness of the Entire vCloud infrastructure

So the question arises: is there any kind of tool available on the market that addresses this need for Amazon, or is CloudWatch enough? I am waiting for a good one-to-one comparison or analysis of the two. So if anybody has any idea, please comment.
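For reference, pulling a basic instance metric out of CloudWatch with Amazon's monitoring command-line tools looks roughly like this; the instance ID is a placeholder, and the flags are from memory, so double-check them against the CloudWatch documentation.

# Average CPU utilisation of one instance, sampled at 5-minute intervals
# (add --start-time/--end-time to widen the window beyond the default)
mon-get-stats CPUUtilization --namespace "AWS/EC2" \
    --dimensions "InstanceId=i-abcdef01" --statistics "Average" --period 300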

 

Categories: Amazon AWS