Last week, my old wireless router died on me. Actually, the power supply stopped working. It was a Topcom router I bought at Aldi for just under 80€, a little more than three years ago, so the warranty had expired only a couple of months earlier. It had been acting strange lately, losing DNS caches and disconnecting wired clients for no obvious reason, but I was still very happy with it. It was cheap, fast (108 Mbps wireless), secure and reliable. It even came with a wifi USB key, which I rarely used. If someone has a spare 5.0V DC 2.0A power supply, just let me know 🙂 I used to experiment a lot with that little router. Once I made a parabolic signal director for it, boosting its range to up to 300m.

So last week I went looking for a replacement. I wanted something cheap but comprehensive, something I could play with a bit without spending too much money. I started reading reviews and found that the Linksys WRT54GL is a favourite among open source firmware enthusiasts, but it looks rather ugly and is more expensive than the WRT54G2, which is also supported by DD-WRT, a fully featured open source firmware. So I ordered one from routershop.nl, and two days later (yesterday) it arrived by mail.

This device looks nice, an important point for me, because it lives next to my TV in the living room.

I played with it a bit, but found that out of the box the configuration possibilities are rather limited. So I flashed it with the new firmware. This page details the process, which is actually rather painless. I ran into only two (known) problems. The first is that the tftp program for sending the new firmware to the device didn’t work under Windows Vista. This was not a problem for me: I just booted into XP and everything worked as described. The second problem was that after setting a new password, I couldn’t save any of the settings I wanted to change. I did the 30/30/30 reset, which means holding the reset button for 90 seconds in total: 30 seconds with the power on, 30 seconds with the power unplugged, and another 30 seconds after plugging it back in. After that I was able to configure the device to my liking.
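For reference, the flash step from XP boils down to a single tftp command. This is just a sketch: I’m assuming the router is still listening on the default Linksys address 192.168.1.1 and that the firmware image is called dd-wrt.bin (use the exact file name the DD-WRT instructions give for your model):

    tftp -i 192.168.1.1 put dd-wrt.bin

The -i switch tells the Windows tftp client to transfer the file in binary mode, which is what a firmware image needs.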

I was very impressed with the possibilities the DD-WRT firmware opens for such a cheap device.

DD-WRT control panel. Very user friendly, every setting at your fingertips.

In a future post, I will describe the network topology I set up with it. I used to need an extra network switch to give an external IP to my Telenet digibox/digicorder.

On 11, 12 and 13 May, two colleagues and I went to London for the Progressive .NET Exchange organized by Skills Matter. It turned out to be a very interesting conference, with many renowned speakers from the open source .NET scene. The conference had two concurrent tracks to choose from.
On the first day I chose to go to Gojko Adzic’s sessions on specification by example and Fitnesse. In these sessions Gojko presented the idea of using examples to drive requirements specification. It was a very intense workshop that used the game of Blackjack to make the point: in small groups we were asked to write a full specification for the rules of Blackjack using just examples. Working from examples turned out to be very useful, as it stresses the importance of “the what” and “the why” over “the how”. In the afternoon session we used Fitnesse to automate the acceptance testing for the application. I had never seen Fitnesse before, and it was funny to see some concepts converge. The easiest way to describe Fitnesse is to say that it is a wiki with green and red bits. You write wiki pages to describe the acceptance criteria for a project, and you specify examples to back up the description. You lay out the examples in tables, which can then be automatically verified by Fitnesse (a small sketch of such a table follows below). If someone is interested in what this is and how it works, I might be able to give a short demo during one of the next KSSs.
Screenshot of a Fitnesse page.
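To give an idea of the format, here is a hypothetical decision table for the Blackjack exercise. The fixture name (Blackjack Hand Value) and the columns are my own invention, not the actual tables we wrote during the workshop; Fitnesse colours each row green or red depending on whether the fixture code returns the expected value in the “?” column:

    |Blackjack Hand Value|
    |first card|second card|hand value?|
    |ace       |king       |21         |
    |ten       |six        |16         |
    |ace       |ace        |12         |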

Tuesday morning I chose Robert Pickering’s session. It was on the programming language F#, a topic on which he has written a book and has extensive knowledge, but the session focused mostly on syntax, and he’s not a very good speaker. A little bit of a disappointment. In the afternoon I went to Ayende’s Advanced NHibernate workshop. It was very intense: he tackled 25 topics in 4 hours, answering questions from the audience, ranging from caching and security to metadata. Later that night we went to the alt.net UK beers, which was a very interesting experience: a dynamic discussion based on topics suggested and voted on by the audience, hosted in the cellar of a pub, with lots of free beer. People shouting, giving opinions and laughing. Very fun indeed.

Wednesday – the final day – I attended the sessions of David Laribee: Towards a New Architect, lean, kanban, team values, and diverging/converging brainstorming specification sessions. These sessions were very practical. In small groups we discussed team values: we defined what matters most in a development team and where the priorities for all stakeholders essentially lie. We then worked together to define a new product – a medical device to manage patient data and facilitate the doctor’s interactions and daily workflow. We did this using a “design storm”: first everyone created something individually, then we came together in small groups of five, compared the results, made a group synthesis and presented it. Finally the entire group discussed the results and moved towards an agreed-upon design. In the afternoon we continued building a product, but this time a site for playing a fantasy soccer strategy game, based on real soccer players and match results. It was a lot of fun, and the exercise showed how agile, lean and kanban can be useful in software planning and development.
Other interesting subjects that came up during these sessions were the Pomodoro technique, Towards a New Architecture, the ten usability heuristics, …

The only downside of the conference was the dodgy Internet connection. Otherwise it would have been even more enjoyable and interesting. Overall it was very instructive and lots of fun!

See what other people had to say about it.

Cloud computing is a model where big corporations host and manage IT infrastructure (and offer services to clients, obviously). These corporations have to invest heavily in big and powerful data centers to be able to host Internet-scale applications.

At work, we have been investigating and discussing Azure, Microsoft’s upcoming cloud offering, in a Special Interest Group (led by Yves). The most important business cases where it makes sense to use Azure are those where customers don’t want to or cannot invest in infrastructure to try out an idea that could potentially become popular.

The concept of Fail Fast or Scale Fast is important in this respect. Start-ups can put some innovative features online; if they catch on: superb, scale up by adding more nodes to handle the traffic. If they don’t catch on: too bad, take them offline again. Other interesting cases are services with heavily spiked load, for example a concert-ticket sales application. Tickets for popular concerts usually sell out within 24 hours of sales opening. To handle these spikes, companies have to maintain a lot of excess capacity, which sits unused most of the time yet can barely cope when the spikes do occur. In such a case the service could simply add a large amount of capacity for a short period of time.
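As a rough illustration, the capacity-planning difference boils down to simple arithmetic. All numbers below are made up by me, purely to show the shape of the trade-off:

    import math

    # Hypothetical numbers for a ticket-sale spike, purely illustrative.
    peak_requests_per_second = 5000      # during the first hours of the sale
    baseline_requests_per_second = 50    # the rest of the year
    requests_per_node = 200              # what one web node can comfortably serve

    nodes_for_peak = math.ceil(peak_requests_per_second / requests_per_node)          # 25 nodes
    nodes_for_baseline = math.ceil(baseline_requests_per_second / requests_per_node)  # 1 node

    print(f"Own hardware: pay for {nodes_for_peak} nodes all year round.")
    print(f"Cloud: pay for {nodes_for_baseline} node(s) normally, "
          f"scale to {nodes_for_peak} only during the sale.")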

In the case of Azure, Microsoft built a software fabric on top of a bunch of connected systems. This allows computing nodes (virtualized servers running the application) to be redistributed and managed within the data centers as Microsoft sees fit (for example to optimize temperature within a data center). There is no guarantee that any given node will stay up at any point in time, and software developers have to take this into account when designing their software. There are no transactions in the classical sense, and a classical relational database is hard to use; the application has to be designed up front to be scalable. Reliability and availability are achieved through replication, partitioning and smart routing.
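To make the partitioning and replication idea a bit more concrete, here is a minimal sketch; it is my own illustration, not Azure’s actual API. A record is routed to a partition by hashing its key, and each partition is stored on several nodes, so losing any single node doesn’t lose the data:

    import hashlib

    NUM_PARTITIONS = 16
    REPLICAS_PER_PARTITION = 3

    def partition_for(key: str) -> int:
        """Deterministically map a key to a partition."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_PARTITIONS

    def replica_nodes(partition: int, nodes: list[str]) -> list[str]:
        """Pick a few consecutive nodes to hold copies of this partition."""
        start = partition % len(nodes)
        return [nodes[(start + i) % len(nodes)] for i in range(REPLICAS_PER_PARTITION)]

    # Example: any node in the list may disappear, but the data lives on 3 of them.
    nodes = [f"node-{i:02d}" for i in range(10)]
    key = "customer-42"
    p = partition_for(key)
    print(f"key {key!r} -> partition {p} -> replicas {replica_nodes(p, nodes)}")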

What about turning it all upside down?

Think about this: instead of limiting the fabric to the corporate data centers, why not take advantage of the “entire” Internet in a peer-to-peer kind of grid? We are all sitting in front of machines that are largely overpowered most of the time, so why not let the cloud provider use this excess capacity in return for a small fee or other compensation to cover the energy bill? You could allow a portion of your own system to be taken up by a virtual machine that is managed by the cloud provider; you don’t have to worry about it. It’s there, eating away your idle CPU cycles — or not. The cloud provider pays by use, nothing less, nothing more. If, for example, Microsoft went this way, they would not have to do much extra work. Their own processing nodes can fail too, and the fabric is very capable of handling such cases. All they would have to do is support more types of hardware (Virtual PC already runs on a lot of systems out of the box), create some infrastructure (set up the P2P network), and invent some smart algorithms to exploit locality in the network plus do basic resource management.
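Just to make the idea tangible, here is a toy sketch of such a scheduler. It is entirely my own speculation (no real provider works like this): work units are handed to peers that are idle enough, preferring peers that are close in the network and cheap:

    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str
        idle_fraction: float   # how much of the machine is currently unused
        latency_ms: float      # rough measure of network "distance" to the data
        fee_per_hour: float    # compensation the owner asks for

    def pick_peer(peers: list[Peer], min_idle: float = 0.5) -> Peer:
        """Choose a peer that is idle enough, preferring nearby and cheap ones."""
        candidates = [p for p in peers if p.idle_fraction >= min_idle]
        if not candidates:
            raise RuntimeError("no suitable peer available, fall back to the data center")
        return min(candidates, key=lambda p: (p.latency_ms, p.fee_per_hour))

    peers = [
        Peer("home-pc-1", idle_fraction=0.8, latency_ms=35, fee_per_hour=0.02),
        Peer("home-pc-2", idle_fraction=0.2, latency_ms=10, fee_per_hour=0.01),
        Peer("office-pc", idle_fraction=0.9, latency_ms=12, fee_per_hour=0.03),
    ]
    print("run work unit on:", pick_peer(peers).name)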

Something similar already exists in other contexts. Think about large-scale 50,000+ node botnets (used to relay spam, run DDoS attacks, …). Think about BOINC.

Why not try to take advantage of all those wasted CPU cycles? I think this could have huge advantages in the future. Due to the distribution of the load, cloud providers would have to worry less about cooling, energy and availability (if the network and its management can be made truly distributed and self-healing). A possible problem is that available capacity is not so easy to predict, but providers could restrict the use of off-premise nodes to the cheapest, best-effort SLAs (no hard guarantees in the service contracts).

Another thought: the cloud provider could try to move the load users generate back to their own machines, if they are sharing resources. If someone is using a web application, why not host the compute-intensive work on their own machine, making the use of the service cheaper? This is where I come full circle.

Any feedback is hugely appreciated.

Today I came across an interesting offering: cheapvoip.com

This is a provider of VoIP telephony. It makes it possible to call fixed landlines in Belgium for free from a computer (using a free software client, comparable to Skype) or from a supported SIP-enabled device.

In practice this means anyone can call me for free (you do have to create a free account and buy at least 10€ of credit) on my company phone number, which is automatically forwarded to my cellphone at no cost to me (my employer pays for it).

Another interesting offering is phone-to-phone VoIP calls, where you let cheapvoip set up a call between two parties at interesting rates. Usually this is cheaper than calling directly yourself, especially for cross-network calls.