There is a lot going on these days at Amazon Web Services. Here is a brief overview to keep you up to date:
New Region US-West-2
Amazon opened a new region in Oregon, just a little north of the existing US-West-1 region in Northern California. Oregon will surely become a good alternative to US-West-1 if you need low latencies in the Western USA, especially since US-West-1 was always 10-15% more expensive than the Eastern region US-East-1, which is not the case with the new Oregon region. AWS offers all major services there, so there is no reason not to give it a try.
New Cluster Instance Type
Need to do some number crunching for your next Nobel Prize in astronomy or physics? Take the new instance type called “Cluster Compute Eight Extra Large”, which offers 88 Compute Units and 60.5 GB of RAM. The basic cluster instance is 90x faster than an EC2 small instance and can be combined with others into a real supercomputer. Clusters can be of nearly any size, as long as you can pay for it. Amazon, for example, started one of these clusters (1064 instances) and it turned out to be the 42nd fastest supercomputer in the world, with a speed of 240.09 teraFLOPS.
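The numbers above allow a quick back-of-the-envelope calculation of the per-instance throughput; a small sketch (the instance count and teraFLOPS figure come from the text, the rest is plain arithmetic):

```python
# Rough per-instance throughput of the 1064-instance cluster mentioned above.
total_tflops = 240.09   # measured Linpack result of the whole cluster
instances = 1064

tflops_per_instance = total_tflops / instances
print(f"{tflops_per_instance:.3f} TFLOPS per cluster instance")
# i.e. roughly 0.226 TFLOPS (~226 GFLOPS) per node
```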
Route 53 in the AWS Management Console
Route 53, the DNS service by Amazon Web Services, has now become more accessible. AWS added Route 53 support to its Management Console; the lack of a GUI was surely the reason many people never gave it a try. Now Route 53 finally becomes something really usable, and you do not need to learn yet another bunch of API commands. Route 53 offers some unique features, which are especially useful if you host within the Amazon cloud, and is good in pricing and reliability. A great how-to can be found here.
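For those who still prefer the API, a Route 53 record update is essentially a small “change batch” document. A sketch of that payload, built as a plain Python dict in the shape boto3’s `change_resource_record_sets` call expects (the domain name and IP address are made-up examples):

```python
# Sketch of a Route 53 change batch for creating one A record.
# "www.example.com." and "192.0.2.10" are hypothetical placeholders.
change_batch = {
    "Comment": "Point www at the new web server",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "192.0.2.10"}],
            },
        }
    ],
}

# With boto3, this dict would be passed as the ChangeBatch parameter of
# client.change_resource_record_sets(HostedZoneId=..., ChangeBatch=change_batch)
print(change_batch["Changes"][0]["ResourceRecordSet"]["Name"])
```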
Amazon made their awesome Simple Email Service (SES) more accessible, especially for starters. You can now add, edit and test sender addresses directly in the AWS Management Console, without using the API.
But even more important is the newly added ability to see stats about deliveries, bounces, complaints and rejects in a CloudWatch-like diagram. Your daily sending quota can also be conveniently monitored.
In the past, all this had to be done through API calls and required a lot of scripting. That was too high an entry barrier for many people, and that is maybe why SES is still not the king of mass e-mailing these days (as it deserves to be).
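To illustrate the kind of scripting this used to require, here is a minimal sketch that computes bounce and complaint rates from raw sending counters (the numbers are made up; in practice they would come from the SES sending statistics API):

```python
# Hypothetical counters of the kind the SES statistics API reports.
stats = {"deliveries": 9800, "bounces": 150, "complaints": 12, "rejects": 38}

attempts = sum(stats.values())
bounce_rate = stats["bounces"] / attempts
complaint_rate = stats["complaints"] / attempts

print(f"bounce rate: {bounce_rate:.2%}, complaint rate: {complaint_rate:.2%}")

# A simple alert threshold: SES watches these rates closely, and so should you.
if bounce_rate > 0.05:
    print("Warning: bounce rate above 5%")
```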
I personally have worked a lot with Amazon SES right from day one. I implemented an SES-to-CloudWatch feature, went from a sending quota of 200 to 1 million e-mails per day, and can strongly recommend SES. It is reliable, and you can send millions of e-mails per day without hassle. And once you operate at these high volumes, the very low price per sent e-mail (often 90% cheaper than comparable offers) becomes a major plus for AWS SES.
So, even if you are “just” sending several hundred automatic e-mails per day: give AWS SES a chance!
I often see confusion about the words IaaS, PaaS and SaaS, mostly when I talk to friends about the cloud and what I am doing. So here is my very simple intro to the meaning of these acronyms.
In general, words ending in …aaS are (mostly) the names of service layers, often relating to the cloud.
IaaS means Infrastructure as a Service. Companies doing IaaS offer you the service of using their virtualized server hardware for your own needs. The difference from standard hosting providers is the “as a service” component: you do not enter any long-term commitments, and you pay only for what you actually use, per hour. Examples are typical cloud providers like Amazon Web Services or Rackspace.
PaaS means Platform as a Service. This is the layer that sits on top of IaaS. It often hides the complexity of running and maintaining servers in the cloud and makes bringing a standard web app into the cloud very simple and convenient. PaaS providers take care of all the scaling, uptime, server updates and server settings. They add a premium of x percent onto the actual IaaS cost produced by a web app and charge a monthly flat fee or by the hour. These days there are many different platform-as-a-service providers; the best-known are Heroku and PHPFog (now AppFog).
SaaS is the web app itself. SaaS offerings often replace existing desktop-only solutions and, due to scalability needs, very often run in the cloud: sometimes directly on the IaaS layer and sometimes with the help of the PaaS layer. They are ad-financed or charge monthly, and there are many software-as-a-service businesses. Just to name the first that come to my mind: Dropbox, Evernote, Google Docs, etc.
There are more layers. One is called BPaaS, or Business Process as a Service, but I won’t focus on that, since it is too esoteric for most people.
In the past few months, Amazon Web Services got its first real competitor. Okay, maybe not a direct competitor, but at least something that could establish itself as a possible alternative to AWS. What I mean is OpenStack, an open-source software for building private and public clouds.
OpenStack was initiated by Rackspace and NASA and implements the main features and even the APIs(!) of AWS EC2 and S3. In the end, AWS, and especially its server virtualization service EC2, already seems to be the standard for “how cloud is done” these days. And OpenStack now offers a free, do-it-yourself infrastructure solution. This standardization makes switching cloud providers easier and adds the first true competitive factor to the young market of cloud Infrastructure-as-a-Service providers.
This 4th part of the series starts with a picture: imagine you want to buy bread at your favorite bakery. So you go into the bakery and ask for a loaf, but there is no bread there! Instead, you are asked to come back in 2 hours, when your ordered bread is ready. That is annoying, isn’t it?
To avoid such a “please wait a while” situation, you need asynchronism. And what is good for a bakery may also be good for your web service or web app.
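The bakery picture maps directly onto a worker queue: the request handler drops an order into a queue and returns immediately, while a background worker does the baking. A minimal sketch with Python’s standard library (a real web app would use a message queue service instead of an in-process queue):

```python
import queue
import threading

orders = queue.Queue()
results = []

def baker():
    """Background worker: processes orders asynchronously."""
    while True:
        order = orders.get()
        if order is None:          # shutdown signal
            break
        results.append(f"{order} is ready")
        orders.task_done()

worker = threading.Thread(target=baker)
worker.start()

# The "request handler": enqueue and return immediately, no waiting.
orders.put("rye bread")
orders.put("baguette")

orders.join()        # a web app would not block here; this is just the demo
orders.put(None)     # tell the worker to shut down
worker.join()
print(results)
```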
Read more …
Tonight, Amazon Web Services released a new service called ElastiCache as a beta. It is a simple, scalable Memcached clustering solution, available in the US-East region.
On the positive side is the built-in health monitoring and recovery, which automatically replaces faulty servers in the cluster. Also good: the overall pricing is similar to a normal EC2 instance, but tuned for 64-bit and high I/O capacity.
Not so good is that AWS still does not seem to learn from its recent disasters, because an ElastiCache cluster is always hosted in a single Availability Zone. When that AZ fails, your complete cluster goes down. Multi-AZ needs to be designed manually in the application code, which is actually a very poor solution. But AWS iterates its services often, and I am sure Multi-AZ will soon be available for ElastiCache.
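Until then, the manual application-level workaround looks roughly like this: shard your keys across two independent cache clusters in different AZs, so losing one AZ only loses half the cache. A sketch with plain dicts standing in for the memcached clients (the AZ names are just examples):

```python
import hashlib

# Stand-ins for two memcached clients, one ElastiCache cluster per AZ.
clusters = {
    "us-east-1a": {},   # hypothetical cluster in AZ us-east-1a
    "us-east-1b": {},   # hypothetical cluster in AZ us-east-1b
}
az_names = sorted(clusters)

def pick_az(key: str) -> str:
    """Deterministically map a cache key to one of the AZ-local clusters."""
    digest = hashlib.md5(key.encode()).digest()
    return az_names[digest[0] % len(az_names)]

def cache_set(key, value):
    clusters[pick_az(key)][key] = value

def cache_get(key):
    return clusters[pick_az(key)].get(key)

cache_set("user:42", "Alice")
print(pick_az("user:42"), cache_get("user:42"))
```

If one AZ goes down, keys mapped to the surviving cluster stay cached; the rest are simply cache misses that fall back to the database.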
Watch the official tutorial video and skip to 4:50:00 to see ElastiCache in action.
After following Part 2 of this series, you now have a scalable database solution. You no longer fear storing terabytes, and the world looks fine. But only for you: your users still suffer slow page requests when a lot of data is fetched from the database. The solution is implementing a cache.
Read more …
Readwriteweb.com published a 3-part guide about data terminology. Ever wanted an easy-to-understand definition of words like NoSQL, big data, Hadoop, or ACID? Bookmark this: Part 1, Part 2, Part 3
Last week, I had a discussion with a friend about the idea of using a Mac mini as a server. Besides the fact that it is an altogether weird (but good-looking) way of hosting, I said that it is okay if you need Mac OS X Server services. His main argument against a Mac mini was the missing redundant hardware, like a 2nd network adapter and a 2nd power adapter. And this was my answer:
Forget about the extra costs of redundant hardware parts in your server. I have maintained servers since 2002, in local, remote, and cloud data centers, and NEVER had a failing network adapter or power adapter. Just once I had a failing hard disk, and it was backed by a mirrored 2nd disk.
It was never the hardware that gave me trouble; it was always the data center itself! For exactly that reason, I switched data center providers roughly every 18 months, and availability turned worse with every change.
On average, my pre-cloud data centers went down for at least 9 hours every 6 months. Sometimes more, sometimes less. And it always happened on a weekend (which is odd). In cloud data centers (mostly AWS), it turned out far worse. I started using AWS for production sites in mid-2009, and in that year I did not suffer any outage. In 2010, outages of an estimated 6-9 hours happened to me every 6 months in US-East-1; EU-West-1 ran like a charm. But in 2011, nearly every quarter at least one very long outage of more than 12 hours struck my servers in EU-West or US-East!
So what should you do for high availability in the cloud era? Always and everywhere, deploy across multiple Availability Zones. Distribute your app servers across all Availability Zones in your region and put them behind an Elastic Load Balancer. Always calculate your load reserves under the condition of one failing AZ. And do the same with your data: use master-slave replication (EC2 MySQL), Multi-AZ (RDS MySQL) or replica sets (MongoDB). And take automatic snapshots of your important EBS volumes.
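The load reserve rule can be made concrete with a little arithmetic: if your peak load needs N instances spread over k AZs, each AZ must be sized so that the remaining k-1 AZs still carry the full load. A sketch (the instance counts are made-up examples):

```python
import math

def instances_per_az(peak_instances: int, azs: int) -> int:
    """Instances to run in EACH AZ so that losing one AZ still leaves
    enough capacity for the full peak load."""
    assert azs >= 2, "you need at least two AZs for any redundancy"
    return math.ceil(peak_instances / (azs - 1))

# Example: peak load needs 12 app servers, spread over 3 AZs.
per_az = instances_per_az(12, 3)
total = per_az * 3
print(f"{per_az} per AZ, {total} total ({total - 12} instances of reserve)")
```

Note that with only 2 AZs, each AZ must carry the full peak load on its own, so adding a third AZ actually reduces the required reserve.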
What is your experience with the availability of data centers?