Want your Bash or ZSH prompt to show which version control branch you are currently on? I tested a few approaches, and this is the fastest and easiest to set up.
This step-by-step guide will provide you with an Ubuntu 13.04 server as guest, using VirtualBox 4.2 on Mac OS X 10.8 as host. It has all the bells & whistles: password-less SSH, full network settings, Ubuntu guest additions and a shared, auto-mounted host folder inside the guest.
According to the latest studies from 451 Research and Gartner, 6.1% of the global hosting market ($51bn) is already using public, cloud-based infrastructure (IaaS & PaaS).
On 7 March 2013, policymakers will speak about the latest developments in cloud computing. The one-day conference will discuss how Europe can take full advantage of the benefits provided by the cloud.
The conference is hosted at The Renaissance Hotel in Brussels and costs 150 EUR; here is the agenda.
In an interview, and even while scanning a CV, I try to find candidates who are smarter than I am in their field of work. I hire people I can learn from and with whom I really enjoy discussing new challenges or ideas.
A proven record of out-of-the-box thinking is maybe the most important factor for me in a candidate. I don’t need yes-men or zombies who blindly follow my lead.
I need people who have their own ideas, who are brave enough to prove that their ideas are better than others’, but who are also willing to be held responsible for failure.
Leading smart out-of-the-box thinkers is just about reminding them of the overall company vision and providing them with the most liberal working conditions. This frees the company management to focus on the really important things (scaling the company and the product) and actually allows a startup to scale.
And I learned my lesson the hard way: whenever I wanted to install middle management in my still-young companies (fewer than 50 people), I realized that I simply had the wrong people in some positions.
So, be bold and hire only people who are smarter than you. And no one else.
I am working on automating the bootstrapping and provisioning of a zoo of cloud servers and had to decide (again) which path to choose: build my own customized image and go from there, or use a vanilla out-of-the-box Linux image and set it up during bootstrapping.
In the past, I chose the first path because it allowed me to quickly start new servers from one custom image (for example with PHP + Nginx), but on every system upgrade or change I had to fix the running servers and create a new image for every change. That was a hell of a lot of work, especially since I had to keep all images in sync across cloud regions and maybe even cloud providers.
I am a fan of Star Trek and I am 32 years old. Just recently I realized that every male person I know, in my private life or at work, is also a fan of Star Trek. They are all my age or a few years older, and they all work in the tech industry as creatives, techies or executives.
My first contact with Star Trek was in the early ’90s, at the age of 12. For the next couple of years I watched Star Trek: The Next Generation (TNG) every afternoon and was totally blown away. Not by the characters (my personal favorite, Tasha Yar, already died in season 1), but by the absolutely positive picture of the future.
It’s been nearly 5 months since my last blog post. The reason I did not blog was partly that I was busy (what a bad excuse), but mostly that I was unsure about the content. I did not want to be the 1000th blog repeating fancy PR articles from cloud firms, so better to shut up.
The time I stopped blogging was also the time I dove into some really serious cloud high-tech work, and I would have loved to blog about it, but it was absolutely confidential. Today the tech side of my work is still mostly confidential, but I am also working on a lot of other cloud-related topics, so I will now turn this blog into more of a personal work weblog.
I will blog about the latest cloud tech, but also about the cloud business in general.
All that may sound vague, but I promise it will become clear soon, and I will unveil what I am working on, step by step. In general, the blog will be more specific, more personal, but also more cloud.
Do you want to be informed immediately via SMS when an AWS Availability Zone goes down? Or when any other major event happens in the Amazon cloud?
I found a free(!) way to do this, and it even works for international cell phone numbers. The solution is to use the new and free web service “If This Then That” (ifttt) to monitor the AWS status RSS feeds. In ifttt-speak that means: “IF THIS (a new post arrives in the AWS RSS feed) THEN send the post to me via SMS”. And here is the step-by-step instruction:
There is a lot going on these days at Amazon Web Services. Here is a brief overview to stay up to date:
New Region US-West-2
Amazon opened a new region in Oregon, which is just a little farther north than the current US-West-1 region in Northern California. Oregon will surely become a good alternative to US-West-1 if you need low latencies in the Western USA, especially since California was always 10-15% more expensive than the Eastern US region US-East-1, which is not the case with the new Oregon region. AWS offers all major services there, so there is no reason not to give it a try.
New Cluster Instance Type
Need to do some number crunching for your next Nobel Prize in astronomy or physics? Take the new instance type called “Cluster Compute Eight Extra Large”, which offers 88 Compute Units and 60.5 GB of RAM. The basic cluster instance is 90x faster than an EC2 Small instance, and several can be combined into a real supercomputer. Clusters can be of nearly any size, as long as you can pay for it. Amazon, for example, started one of these clusters (1064 instances) and it turned out to be the 42nd-fastest supercomputer in the world, with a speed of 240.09 teraFLOPS.
Route 53 in the AWS Management Console
Route 53, the DNS service by Amazon Web Services, is now becoming more accessible. AWS added Route 53 support to its Management Console; the missing console support was surely the reason many people never gave it a try. Now Route 53 finally becomes really usable, and you do not need to learn yet another bunch of API commands. Route 53 offers some unique features, which are especially useful if you host within the Amazon cloud, and it is good in both pricing and reliability. A great how-to can be found here.
Amazon made its awesome Simple Email Service (SES) more accessible, especially for starters. You can now add, edit and test sender addresses directly in the AWS Management Console, without using the API.
But even more important is the newly added feature to see stats about deliveries, bounces, complaints and rejects in a CloudWatch-like diagram. Your daily sending quota can also be conveniently monitored.
In the past, all this had to be done through API calls and required a lot of scripting. That was too high an entry barrier for many people, and that’s maybe why SES is still not the king of mass e-mailing these days (as it deserves to be).
I personally have worked a lot with Amazon SES right from day one. I implemented an SES-to-CloudWatch feature, went from a sending quota of 200 to 1 million e-mails per day, and can strongly recommend SES. It is reliable, and you can send millions of e-mails per day without a hassle. And once you start to operate at these high volumes, the very low price per sent e-mail (often 90% cheaper than other similar offers) becomes a major plus for AWS SES.
So, even if you are “just” sending several hundred automatic e-mails per day: give AWS SES a chance!
I often see confusion about the terms IaaS, PaaS and SaaS, mostly when I talk to my friends about the cloud and what I am doing. So here is my very simple intro to the meaning of these acronyms.
In general, words ending with …aaS are (mostly) the names of service layers, often relating to the cloud.
IaaS means Infrastructure as a Service. Companies doing IaaS offer you their virtualized server hardware as a service for your own needs. The difference from standard hosting providers is the “as a service” component: you do not enter any long-term commitments, and you pay only for what you actually use, per hour. Examples are typical cloud providers like Amazon Web Services or Rackspace.
PaaS means Platform as a Service. This is the layer that sits on top of IaaS. It often hides the complexity of running and maintaining servers in the cloud and makes bringing a standard web app into the cloud very simple and convenient. PaaS providers take care of all the scaling, uptime, server updates and server settings. They add a premium of x percent onto the actual IaaS cost produced by a web app and charge a monthly flat fee or per hour. These days there are many different platform-as-a-service providers; the best-known are Heroku and PHPFog (now AppFog).
SaaS means Software as a Service: that’s the web app itself. SaaS offerings often replace existing desktop-only solutions and, due to scalability needs, very often run in the cloud, sometimes directly on the IaaS layer and sometimes with the help of the PaaS layer. They are ad-financed or charge monthly, and there are many software-as-a-service businesses. Just to name the first that come to my mind: Dropbox, Evernote, Google Docs, etc.
There are more layers. One is called BPaaS, Business Process as a Service, but I won’t focus on that, since it is too esoteric for most people.
In the past few months, Amazon Web Services got its first real competitor. Okay, maybe not a direct competitor, but at least something that could establish itself as a possible alternative to AWS. What I mean is OpenStack, an open-source software for building private and public clouds.
OpenStack was initiated by Rackspace and NASA and implements the main features and even the APIs(!) of AWS EC2 and S3. In the end, AWS, and especially its server virtualization service EC2, already seems to be the standard for “how cloud is done” these days. And OpenStack now offers a free, do-it-yourself infrastructure solution. This standardization makes switching cloud providers easier and adds the first true competitive factor to the young market of cloud Infrastructure-as-a-Service providers.
This 4th part of the series starts with a picture: imagine that you want to buy bread at your favorite bakery. So you go into the bakery and ask for a bread, but there is no bread there! Instead, you are asked to come back in 2 hours, when your ordered bread will be ready. That’s annoying, isn’t it?
To avoid such a “please wait a while” situation, you need asynchronism. And what’s good for a bakery may also be good for your web service or web app.
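The bakery picture maps directly onto a producer/consumer setup: expensive work is handed to a background worker ("pre-baked") so the request path never blocks on it. A minimal sketch in Python, with made-up names purely for illustration:

```python
import queue
import threading

# Orders go into a queue; a background "baker" works them off,
# so the requester never waits for the slow part.
orders = queue.Queue()
shelf = {}  # finished goods, ready for immediate pickup

def bake(bread):
    # stands in for any slow job (rendering, resizing, mailing, ...)
    return f"fresh {bread}"

def baker():
    while True:
        bread = orders.get()
        shelf[bread] = bake(bread)
        orders.task_done()

worker = threading.Thread(target=baker, daemon=True)
worker.start()

# The "customer" request only enqueues and returns immediately.
orders.put("rye")
orders.put("sourdough")
orders.join()  # a real app would not wait; done here only to show results
print(shelf["rye"])
```

In a real web service, the queue would be a durable broker (e.g. a Redis list or SQS) and the baker a separate worker process, but the shape is the same.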
On the positive side, there is the built-in health monitoring & recovery, which replaces faulty servers of the cluster automatically. Also good: the overall pricing is similar to a normal EC2 instance, but tuned for 64-bit and a high I/O capacity.
Not so good: AWS still does not seem to have learned from its recent disasters, because an ElastiCache cluster is always hosted in just one Availability Zone. When that AZ fails, your complete cluster goes down. Multi-AZ has to be designed manually in the application code, which is actually a very poor solution. But AWS iterates on its services often, and I am sure that Multi-AZ will soon be available for ElastiCache.
Watch the official tutorial video and skip to 4:50:00 to see ElastiCache in action.
After following Part 2 of this series, you now have a scalable database solution. You have no fear of storing terabytes anymore, and the world is looking fine. But just for you. Your users still suffer from slow page requests when a lot of data is fetched from the database. The solution is to implement a cache.
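The core caching pattern is "read-through": check the cache first, hit the slow database only on a miss, then store the result. A sketch in Python, where a plain dict stands in for a real cache like Memcached or Redis (all names here are mine, not from the post):

```python
# A tiny read-through cache: the cache dict stands in for
# Memcached/Redis, fetch_from_db for a slow SQL query.
cache = {}
db_hits = 0

def fetch_from_db(user_id):
    global db_hits
    db_hits += 1  # count expensive database round-trips
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]           # fast path: served from memory
    value = fetch_from_db(user_id)  # slow path: real database query
    cache[key] = value
    return value

get_user(42)       # miss -> hits the database
get_user(42)       # hit  -> served from the cache
print(db_hits)     # only one database round-trip for two requests
```

With a shared cache in front of the database, repeated reads of hot data stop touching the database at all, which is exactly what takes the pressure off slow page requests.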
Last week, I had a discussion with a friend about the idea of using a Mac mini as a server. Besides the fact that it’s an altogether weird (but good-looking) way of hosting, I said that it’s okay if you need Mac OS X Server services. His main reasons against a Mac mini were the missing redundant hardware parts, like a 2nd network adapter and a 2nd power adapter. And this was my answer:
Forget about the extra costs of redundant hardware parts in your server. I have maintained servers since 2002, in local, remote and cloud data centers, and NEVER had a failing network adapter or power adapter. Just once I had a failing hard disk, and it was backed by a mirrored 2nd disk.
It was never the hardware that gave me trouble; it was always the data center itself! For exactly that reason, I switched data center providers nearly every 18 months, and availability got worse with every change.
On average, my pre-cloud data centers went down for at least 9 hours every 6 months. Sometimes more, sometimes less. And it always happened on a weekend (which is odd). In cloud data centers (mostly AWS), it got far worse. I started using AWS for production sites in mid-2009, and in that year I did not suffer any outage. In 2010, outages of an estimated 6-9 hours hit me every 6 months in US-East-1; EU-West-1 ran like a charm. But in 2011, nearly every quarter at least one very long outage of more than 12 hours struck my servers in EU-West or US-East!
So what should you do for high availability in the cloud era? Do a multi-Availability-Zone deployment, always and everywhere. Distribute your app servers across all Availability Zones in your region and put them behind an Elastic Load Balancer. Always calculate your load reserves under the condition of one failing AZ. And do the same with your data: use master-slave replication (MySQL on EC2), Multi-AZ (RDS MySQL) or replica sets (MongoDB). And take automatic snapshots of your important EBS volumes.
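The "calculate your load reserves for one failing AZ" rule is simple arithmetic: with n zones, the surviving n-1 must absorb the full peak, so each zone should normally run at no more than (n-1)/n of its capacity. A sketch of that math (function names are mine):

```python
import math

def required_headroom(zones):
    """Maximum utilization per zone such that the remaining zones
    can absorb the full load when one zone fails."""
    return (zones - 1) / zones

def instances_needed(peak_instances, zones):
    """Total instances to provision so that peak load is still
    served after one whole zone (and its instances) fails."""
    per_zone = math.ceil(peak_instances / (zones - 1))
    return per_zone * zones

# With 3 AZs, each zone should stay below ~67% utilization ...
print(round(required_headroom(3), 2))  # -> 0.67
# ... and serving a peak of 6 instances safely takes 9 in total.
print(instances_needed(6, 3))  # -> 9
```

The more zones you spread across, the smaller the over-provisioning premium gets, which is one argument for using every AZ in a region rather than just two.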
What is your experience with the availability of data centers?
The guys from Skills Matter are currently organizing the first NOSQL eXchange conference. It will take place in London on Nov 2 and is about NoSQL technologies like MongoDB, CouchDB, Cassandra, Riak and Neo4j. The best part: Early Bird tickets are just 50 GBP!
More information can be found here:
I just ran some searches on Google Trends about databases. Interestingly, MongoDB overtook CouchDB very early on. And here is the falling trend of MySQL and similar relational databases:
After following Part 1 of this series, your servers can now scale horizontally and you can already serve thousands of concurrent requests. But somewhere down the road your application gets slower and slower and finally breaks down. The reason: your database. It’s MySQL, isn’t it?
By now at the latest, the required changes are more radical than just adding more cloned servers, and they may even require some boldness. In the end, you can choose from 2 paths:
Just recently I was asked what it would take to make a web service massively scalable. My answer was lengthy, and maybe it is interesting for other people too. So I am sharing it with you here on my blog, split into parts to make it easier to read. New parts will be released on a regular basis. Have fun, and your comments are always welcome!
Part 1 - Clones
Public servers of a scalable web service are hidden behind a load balancer. This load balancer evenly distributes load (requests from your users) onto your group/cluster of application servers. That means that if, for example, user Steve interacts with your service, his first request may be served by server 2, his second request by server 9, and his third request maybe by server 2 again.
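One simple strategy such a load balancer can use is round-robin: each incoming request goes to the next server in the cluster, cycling around. A minimal sketch in Python (server names are made up; real balancers like ELB also do health checks and other strategies):

```python
import itertools

# Nine identical, stateless app servers behind the balancer.
servers = [f"server-{n}" for n in range(1, 10)]
rotation = itertools.cycle(servers)

def route(request):
    # hand the request to the next server in the cycle
    return next(rotation)

# 11 requests wrap around the 9-server cluster.
assignments = [route(f"request-{i}") for i in range(11)]
print(assignments[0], assignments[9])
```

The important consequence is the one the post describes: any server must be able to serve any user at any time, so the application servers have to stay stateless.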
During the last weeks, I have been interviewing potential engineering candidates. One of my questions is about knowledge of readme-driven development. Interestingly, not one candidate had ever heard of readme-driven development, but nearly 90% could precisely explain what test-driven development means.
This small statistic surprised me and opened my eyes to the fact that our implementation of readme-driven development at 6Wunderkinder must be something new in the IT industry. So I dedicate this (longer) blog post to a deeper insight into how readme-driven development can be done under real-life conditions.
… Forrester says that the cloud will be the third major client software battleground. The PC operating system was the first, won early by Microsoft with niches carved out for Apple and Linux. Mobile is the second and remains fluid and volatile with Google’s Android leading in market share with Apple, Research in Motion and Microsoft figuring out how to gain ground. The personal cloud will be the third and will be built on top of the first two. Hence, the companies with strong infrastructure in operating systems and communications will be the leaders in the personal cloud as well….
Very interesting, read more at ReadWriteWeb
With the just recently introduced Route 53 DNS service by AWS, you can now route your base domain (example.com) directly to your Elastic Load Balancer. In the past, you had to do wild redirecting from the base domain to www.example.com to point it at your ELB, which was really annoying.
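Under the hood this is an "alias" record on the zone apex, created via Route 53's ChangeResourceRecordSets call. The change batch looks roughly like this in the shape the current AWS CLI accepts (the hosted zone ID and ELB DNS name below are placeholders, not real values):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3EXAMPLE",
          "DNSName": "my-elb-123456789.us-east-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Note that the `HostedZoneId` inside `AliasTarget` is the hosted zone of the ELB itself, not of your own domain.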
EC2 overall also becomes safer now, because you can completely hide an instance behind a load balancer. Just allow the load balancer’s security group as a source in your instance’s security group and you’re ready to go!
For more information, please see Werner Vogels’ blog post at http://www.allthingsdistributed.com/2011/05/aws_ipv6.html
I finished a long-planned presentation about Redis and how we use it for the backend of Wunderlist and Wunderkit.
Redis is used for more things than “boring” caching. We even started to use Redis NOT as a cache, but as an easy way to gather statistics about Wunderlist. Only later did we add a small cache layer to Wunderlist.
With our next product, called Wunderkit, we will extend our use of Redis: we use it for logging, caching and queues.
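Gathering statistics in Redis usually means little more than INCR on well-chosen keys. Here is a sketch of that pattern; a plain dict stands in for Redis (in production these would be `INCR` calls against a real server), and the key naming is my own invention, not Wunderlist's actual schema:

```python
from collections import defaultdict

# Stands in for a Redis connection; counters[key] += 1 mirrors
# what Redis INCR does on a string key.
counters = defaultdict(int)

def incr(key, amount=1):
    counters[key] += amount
    return counters[key]

def track_event(event, day):
    # one counter per event and day, e.g. "stats:signup:2011-06-01",
    # plus a running total per event
    incr(f"stats:{event}:{day}")
    incr(f"stats:{event}:total")

track_event("signup", "2011-06-01")
track_event("signup", "2011-06-01")
track_event("signup", "2011-06-02")
print(counters["stats:signup:total"])
```

Because INCR is atomic and extremely cheap, this kind of counting can sit directly in the request path without a dedicated analytics pipeline.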
AWS CloudWatch is a great service by Amazon to monitor your server metrics. Now you can also use the CloudWatch API to store your own business metrics in CloudWatch. Why should you do that?
Because CloudWatch offers a convenient visualization of your data, but, even more importantly, it offers CloudWatch alarms. For example, you can get an e-mail when your registered users suddenly increase by 10% instead of 1% per day.
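The alarm itself is just threshold checking, which CloudWatch does for you once the metric is stored. The underlying idea can be sketched in pure Python (no AWS calls; the function names and the 10% threshold are only illustrative):

```python
def growth_rate(yesterday, today):
    # day-over-day growth of a business metric, e.g. registered users
    return (today - yesterday) / yesterday

def should_alarm(yesterday, today, threshold=0.10):
    # fire when growth exceeds the threshold (10% here, versus the
    # usual ~1% per day mentioned above)
    return growth_rate(yesterday, today) > threshold

print(should_alarm(1000, 1011))  # 1.1% growth -> False
print(should_alarm(1000, 1150))  # 15% growth  -> True
```

With CloudWatch you push the raw numbers via the put-metric API and configure the threshold in the alarm, and the notification (e-mail, SMS) is handled for you.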
That was somewhat funny. While listening to the presentation “High Scalability with AWS” by Matt Wood, AWS Evangelist, at the Amazon Web Services Tech Summit 2011, I received a CloudWatch alarm that a bunch of our servers had reached a certain load peak. Which meant: time to scale!
And while Matt presented slide by slide how to add an EC2 instance to an ELB in order to scale horizontally, I actually had to do the same, but with 2 instances instead of 1.
The whole thing took me less than 5 minutes, and I was even faster in real life than Matt with his slides :)
I found an interesting reply by Joe Armstrong, the inventor of Erlang. It was posted in a Google Group in 2010 and was about the difference between Erlang and Node.js:
You have to ask why was erlang designed? why was node.js designed?
I don’t know why node.js was designed - I guess so you could write servers in js.
Erlang was designed for building soft real-time fault-tolerant systems that could be upgraded without taking them out of service. These design criteria led to erlang features like:
- fast per-process garbage collection
- the ability to change code on the fly (ie the module reload stuff, with the ability to run old and new module code at the same time)
- several orthogonal error detection mechanisms (catch-throw / links / …)
- cross-platform error detection and recovery (ie to make something fault tolerant needs at least 2 machines; think of the case when one machine crashes - the second machine must be able to take over)
In the erlang system there is quite a lot going on behind the scenes to make sure this all happens without the user being aware of it - to first approximation you can spread processes and database tables over multiple nodes and it will behave in a reasonable manner …
I don’t think things like this have any correspondence in node.js - I guess if an entire node.js node crashes, the user would not expect another node to take over in a seamless manner.
The fun stuff in Erlang has to do with how the failure model interacts with code changing, moving code around, upgrading code without stopping the system and so on - these characteristics are extremely important if you want to build a 24x7 system with zero down time - less so if you just want to serve up pages as fast as possible and don’t care if you take the system out of service for upgrades or errors.
Erlang was designed for building fault-tolerant systems - node.js was not
While searching for a Redis client for Erlang, I stumbled upon this nice set of slides, including an explanation of Redis: http://simonwillison.net/static/2010/redis-tutorial/
P.S.: I finally found a good Redis client for Erlang here: https://github.com/JacobVorreuter/redis_pool
The handling of multiple security policies for AWS offerings has always been a terrible pain until now. Since a few days ago, Amazon has offered the management of policies, users and groups in its comfortable console web interface.
And the best part is the new Policy Generator, which makes creating policies almost fun: http://awspolicygen.s3.amazonaws.com/policygen.html
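What the generator produces is just a JSON policy document. As an example of the shape (a hypothetical read-only statement for a single S3 bucket; the bucket name is a placeholder):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}
```

Hand-writing these Effect/Action/Resource blocks is exactly the error-prone part the generator now does for you.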
I’m a huge fan of Panic. And now they are even diving into iOS development and have come up with a very handy tool to work on or monitor your servers while on the road (or at a conference). The app is called Prompt and is available for $4.99 in the App Store.
Node.js is awesome for serving static files or doing realtime apps. Just yesterday, I ran into the challenge of deploying one of my Node.js scripts with Capistrano.
Since a Node.js process automatically ends when you close the terminal connection, I searched for a convenient tool for “daemonizing” Node.js. I found and successfully implemented the Node.js module Forever, which also allows restarting a Node.js daemon.
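At its core, what a tool like Forever does is supervise a child process and start it again whenever it dies. The essence can be sketched in a few lines (Python here purely for illustration; Forever itself is a Node.js tool with many more features, like log handling):

```python
import subprocess
import sys

def supervise(cmd, max_tries=3):
    """Run cmd and restart it whenever it crashes, like a tiny 'forever'."""
    tries = 0
    while tries < max_tries:
        tries += 1
        result = subprocess.run(cmd)
        if result.returncode == 0:
            break  # clean exit: stop supervising
        # non-zero exit: loop around and start the process again
    return tries

# A child that always "crashes" is restarted until the limit is reached.
print(supervise([sys.executable, "-c", "import sys; sys.exit(1)"]))
```

Forever additionally detaches the child from the terminal, which is the part that solves the "ends when you close the connection" problem.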
P.S.: Here are some other great Node.js modules/packages
Adrian Cockcroft, Cloud Architect at Netflix:
The key challenge is to get into the same mind-set as the Googles of this world, the availability and robustness of your apps and services has to be designed into your software architecture, you have to assume that the hardware and underlying services are ephemeral, unreliable and may be broken or unavailable at any point, and that the other tenants in the multi-tenant public cloud will add random congestion and variance.
In reality you always had this problem at scale, even with the most reliable hardware, so cloud ready architecture is about taking the patterns you have to use at large scale, and using them at a smaller scale to leverage the lowest cost infrastructure.
- Always let your app live in more than one Availability Zone, even at the cost of some performance (you can surely overcome these small latencies with some brains)
- Critical stuff should be saved on instance storage - even if that still sounds odd to me
- Even AWS may temporarily fail - accept this fact and prepare yourself
Read about the whole snafu at the Reddit Blog
Just too awesome, try for yourself: http://www.mongly.com/
Come and join our team at 6Wunderkinder in Berlin!
Current job offers at http://www.6wunderkinder.com/we-are-hiring/
What distinguishes a good software engineer from a great one? 17 answers on Quora