
All about PostgreSQL – the world’s most advanced open source database

2011 January 26

Simon Riggs is the Founder and Chief Technology Officer of 2ndQuadrant. He is also a major developer and committer on the PostgreSQL project. Simon has contributed major features in each of the last six versions of PostgreSQL. His work includes recovery and replication, performance and monitoring, as well as designs for many other features.

I met Simon and some members of his team during the DevCon and Jax Conference in London. During the social meeting, we talked about databases and my ignorance of PostgreSQL. After spending so many years in tech publishing, I always thought that, even though I do not understand the technologies, I have a fair idea of what they are about, but then I found out that I knew practically nothing about PostgreSQL…

 

What is the history of PostgreSQL?
Postgres started as a research project at the University of California, Berkeley in the late 1980s. Mike Stonebraker’s research team went back to basics to design the datastore of the future, including items like user-defined datatypes, lock-avoiding concurrency and a DBMS designed for multi-CPU architectures.

That project was eventually commercialised as Illustra, but that’s another story. The main event here is that the original Postgres code was BSD open source and so various pioneers took on the code and started fixing the bugs, and adding new features.

The community has grown simply because of the project policies:
•    There isn’t just a single company involved, it’s a community, so everybody benefits from helping get new people involved.
•    All contributions get credited. You can see who did the work, so your efforts are rewarded.
•    Developers answer users’ questions on mailing lists and IRC, which helps new adopters and encourages people to ask challenging questions and report bugs.

I am surprised to hear that PostgreSQL is 20 years old and is regularly updated, when so little is known about it outside of the community. How do you explain this lack of noise?

The lack of noise is because of the lack of fully funded marketing departments.

In terms of “PostgreSQL awareness”, if I go to an open source computing conference and ask “have you heard of Postgres?” almost everybody has. Some people know us as “Postgres”, some people know us as “PostgreSQL”. Both names are used, even though the official name is PostgreSQL. So the tech staff have heard of us, it’s just non-technical management that may not be aware. But you don’t need 100% awareness to be short-listed, and we’re certainly popular with CFOs!

And in the last year, the question “do you use Postgres?” now gets more than 50% success as well.

I understand that PostgreSQL is updated continuously – who does the upgrades? If you have a lot of developers working on the project who decides which work to integrate in the product?

PostgreSQL does a major release once each year.

The development process is about “open engineering”. We discuss a need, then design a solution to that need, then implement it. Then we peer review it, rewrite bits and bring the code up to rock solid quality levels. People often specialise in particular parts of the process, some people have great ideas, others have great designs. Then we have great coders, great reviewers and thorough testers. The driving force is about “can we make that better?”. For the contributors, there’s serious kudos in being able to suggest a better way. It’s mostly fairly polite and since people give credit, it encourages everybody to help.

There are very few actual “committers” on the project – people who can add code to the repository. Many people develop features for PostgreSQL, typically about 100 authors in any one release, contributing a few hundred features. Most of the bigger features are written by a smaller group of major developers, not all of whom have reached committer status yet.

In terms of “who decides” we work on a consensus basis. If somebody has a reasonable and rational objection, then we need to take that into account. Sometimes that means patches are rejected, but often it means sections of code are added or a feature changed slightly.

The dev process works well, so we end up focusing on the features people really want and that make sense, not wasting time on “marketing features” – ones that sound great but aren’t properly implemented, or that nobody would ever use.

Why should one use PostgreSQL instead of Oracle or MySQL or any other competitive databases?

Well, first, price. PostgreSQL is completely free and the free version is also the latest and best version.

Second, cost of ownership. PostgreSQL follows the SQL Standard very closely, so you’ll be able to reuse skills easily from other databases.

PostgreSQL is also low on DBA overhead, with many people reporting having used it untouched for years. High-volume and complex databases need more love, it’s true, but we use the hardware efficiently and don’t waste people’s time with pointless optional parameters and manual tasks.

With PostgreSQL, “it just works” we say. We add many advanced features, yet they work in seamless ways so you can use them easily. For example, almost all database definition SQL (known as DDL) is transactional, so you can use those statements anywhere without having to think about transaction boundaries. The PostgreSQL TEXT datatype is optimised for anything between 1 byte and 1 Gigabyte column sizes, yet they all work the same – no need to use a different datatype for > 2K or > 32K. What’s even more important is that the underlying code is well designed and well documented, so that we don’t spend months refactoring at the start of each release.
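To make the transactional DDL point concrete: schema changes can be wrapped in a transaction and rolled back like any other statement, so a failed migration leaves nothing behind. PostgreSQL supports this natively; the sketch below uses Python’s built-in sqlite3 module (SQLite also runs DDL inside transactions) purely so the example is self-contained with no database server required.

```python
import sqlite3

# In-memory database; DDL statements participate in transactions,
# just as they do in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()

cur.execute("BEGIN")
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO accounts (name) VALUES ('alice')")
cur.execute("ROLLBACK")  # the table and the row both vanish

# After the rollback, the table does not exist at all.
tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # -> []
```

On a database without transactional DDL, a migration script that dies halfway through leaves a half-changed schema; here the CREATE TABLE simply never happened.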

Third, or maybe this should be first? Advanced features. PostgreSQL has an advanced optimiser, with many different types of join plan and four different types of index. It also offers:

•    Array types that really work and are indexable.
•    Server-side programming languages such as PL/pgSQL, plus options to write stored procedures in Python, Perl, Java, Tcl, R, Ruby, etc.
•    Extensible datatypes and operators that allow implementation of full text search, high performance GIS systems and many other datatypes.
•    We also support recursive queries, windowed aggregate queries, plugins of all different kinds.
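Recursive queries, for instance, use the standard WITH RECURSIVE syntax. The sketch below runs one through Python’s built-in sqlite3 module rather than PostgreSQL itself, only so it executes without a server; the SQL is the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A recursive common table expression generating the integers 1..5,
# using the same WITH RECURSIVE syntax that PostgreSQL accepts.
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5
    )
    SELECT n FROM counter
""").fetchall()

print([n for (n,) in rows])  # -> [1, 2, 3, 4, 5]
```

The same construct walks trees and graphs (org charts, bills of materials) in a single query instead of a loop of round-trips from application code.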

Fourth, the code is robust, with very few security flaws and bugs.

And lastly, it performs very well, with some reports of people getting better performance after upgrading from commercial databases.

With Oracle buying MySQL, do you feel that PostgreSQL will gain, or has gained, market share?

PostgreSQL has always been favoured by people that come from an Oracle background. It’s very similar, with advanced features, non-blocking locking and similar stored function languages.

People often ask the “MySQL question” as if somehow PostgreSQL and MySQL have anything in common. Apart from being open source, the two projects are radically different in terms of organisation, typical use cases, feature set, code quality, code licence and others. PostgreSQL is the database DBAs choose, if they can.

MySQL is a little confused now. It used to be this fast, mostly read only database. Now there are lots of different forks, all doing slightly different things. It appears to be able to do anything: it’s advanced, it’s simple, depending upon which fork and which plugins you use – but you can’t use them all at once. But you’re right – if anybody was using MySQL because of its feature set, they’re certainly coming to PostgreSQL now because of the licensing situation.

PostgreSQL does one thing well: General Purpose read/write SQL database.

That means you can throw all kinds of database designs at PostgreSQL and it copes with them well. Why? Because when we implement a feature, we implement it fully and we tune it as well. The PostgreSQL community is working together in one direction – to enhance a real-world database that works effortlessly and efficiently on a range of different workloads. So if you pick PostgreSQL for a project you can be sure you won’t be limited by feature set, and the price, performance and robustness are all there too.

It’s not really surprising there is a huge growth in usage in recent years.

I keep hearing about NoSQL databases. What are they?

Well, I see NoSQL databases as providing these main things:

•    A stripped down database that focuses on speed above all other features.
•    By simplifying the range of actions possible, NoSQL databases are able to scale sideways much more easily than databases running more complex workloads
•    Many also provide the ability to add new data elements easily without issuing data definition statements

Those are all valid lessons. But they apply equally to any datastore, so they apply to PostgreSQL as well. What I mean by that is that with the right set of options, PostgreSQL is very fast as well. And with the right architecture, an application can scale on any product, so I don’t really see the point in deliberately removing features you might need in the future.

How come CouchDB, FluidDB, MongoDB etc. are so much in the news at the moment?

New things are sexy. Nobody wants to hear that the way your Dad used to manage data is still the right way, in most cases. Especially when it turns out that what you just suggested isn’t new at all, and that your Granddad used to manage data that way too before he gave it up.

Is there a relationship between the growth of NoSQL databases and the growth of social media?

Yes, but not all social media applications run NoSQL. Far from it.

In the past, IT was mostly about billing and number crunching. Your electricity bills, airline tickets and banking transactions. In recent years we’ve seen a move towards datastores keeping track of lower valued information, like web clicks, and general internet usage data.

That has meant a couple of things:

•    “Mostly read only, very fast response” became an important type of application for databases – where MySQL became famous.
•    Massive scalability is now a requirement for top applications in many companies. But just because Facebook uses XYZ for its main site, it will still use other types of database for its other applications.
•    Rigid control over data quality is less important than the ability to bring new features to market quickly and easily. With apps available 24×7, downtime is nearly impossible to schedule. For major apps, we also need more than a few hundred data columns.

To which PostgreSQL has responded. In version 9.0 there is a new version of “hstore” which allows you to store many attribute=value groups for any key, allowing flexibility without downtime. You could already add or remove columns in seconds even on large tables and add new indexes concurrently. In 9.1, we’ve reduced the lock strength of many ALTER TABLE commands, so you can add foreign keys and partitions without blocking SELECTs.
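For readers unfamiliar with hstore, the underlying idea is simple: each key carries an open-ended bag of attribute=value pairs, so adding a new attribute never requires a schema change. A toy Python sketch of that pattern (illustrative only; in PostgreSQL the pairs live inside a single hstore column, queryable and indexable from SQL):

```python
# A toy in-memory sketch of the hstore idea: every key maps to an
# open-ended set of attribute=value pairs, so new attributes need
# no ALTER TABLE. (Illustrative only; PostgreSQL keeps these pairs
# inside one column of a normal table.)
store = {}

def set_attrs(key, **attrs):
    """Merge attribute=value pairs into the record for `key`."""
    store.setdefault(key, {}).update(attrs)

set_attrs("user:1", name="alice")
set_attrs("user:1", city="london")   # a brand-new attribute, no DDL
set_attrs("user:2", name="bob")

print(store["user:1"])  # -> {'name': 'alice', 'city': 'london'}
```

The schema-flexibility that NoSQL stores advertise is exactly this shape, but inside hstore it sits alongside ordinary relational columns, transactions and indexes.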

PostgreSQL has for years had the PL/Proxy framework for infinite scalability, as used by Skype and MyYearbook.com. Essentially this is just the same architectural rules as the NoSQL folk, but using PostgreSQL, so you can take advantage of the other features as well.
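The architectural rule behind that kind of scaling is straightforward: hash each key to pick one of N partitions, so every record deterministically lives on exactly one node. A minimal Python sketch of the routing rule (the function name is hypothetical; PL/Proxy applies the same idea inside the database):

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Pick a partition by hashing the key.

    Using a real hash (not Python's per-process hash()) keeps the
    routing stable across processes and machines, which is what lets
    every client agree on where a given key lives.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_shards

# Eight users spread over four partitions; each key always routes
# to the same shard, so reads and writes for it hit one node.
shards = [shard_for(f"user:{i}", 4) for i in range(8)]
print(shards)                                # each value is in 0..3
print(shard_for("user:0", 4) == shards[0])   # -> True: routing is stable
```

Adding capacity means adding partitions and re-mapping keys; the application-visible interface (call a function with a key) stays the same.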

How do you make a living out of PostgreSQL?

I’ve worked with databases professionally for more than 20 years, so becoming a PostgreSQL professional was more about switching from using other databases. In 2004 I started doing PostgreSQL training and things took off from there really. Soon, I was full-time solely on PostgreSQL.

Personally, I do a lot of work for companies that want additional features in core PostgreSQL, or various kinds of plugins. My company does all the things our customers request: 24×7 support, remote DBA, tuning, training, configuration, systems management, high availability… there’s always a need for expertise.

In September, PostgreSQL version 9.0 was released. What are the most prominent features in the new release?

•    Streaming Replication
•    Hot Standby
•    Join Removal (for ORMs)
•    Default Privileges
•    64-bit Windows support
•    New security features
•    In-place upgrades
•    Improved user-defined functions in Perl and Python

…Safer, faster, easier to use

How do you explain that so few books have been published on PostgreSQL?

That’s down to the publishers, I guess. I’ve been trying to write an advanced book on PostgreSQL for 5 years. Finally, we’ve published these two books.

•    “PostgreSQL 9 Administration Cookbook” Simon Riggs and Hannu Krosing ISBN 978-1-84951-028-8 (PACKT Publishing)
•    “PostgreSQL 9 High Performance” Gregory Smith ISBN 978-1-84951-030-1 (PACKT Publishing)

Both of those books are selling “very well”, I am told by those that know.

Most of the books I found on Amazon are over 5 years old.

True in English, but Germany and France have had good new books in recent years. It’s down to publishers more than authors or readers.

PostgreSQL is mature, so yes, books have been written over the years.

The software is changing fast, so many of the older books are unfortunately out of date in many respects.

Can you mention some of the big users of PostgreSQL?

NTT, Skype, Afilias, Nokia, …

… there are many more, mostly hidden by non-disclosure agreements. Put it this way: 2ndQuadrant’s 16 technical staff are pretty busy, and we have many large customers.

Almost all providers in the Telco space use it, since they have deployments of hundreds or thousands of servers. That’s both service providers and equipment manufacturers.

Many companies get big discounts from other database suppliers. Many of them aren’t willing to risk losing that discount by voicing support for PostgreSQL, especially since we can’t offer a discount on the licence – the database is already free.

I hope this post has highlighted some of the key features of PostgreSQL, but please feel free to make comments and ask questions, as I am sure Simon will kindly answer them.

JAX London 2011 – Early Bird Discount for GMT Readers

2011 January 25
by admin

JAX London, the UK’s premier Java developer conference, takes place 11th – 13th April, 2011.
JAX hosts an impressive cast of big-hitters and exciting thought leaders, covering the most important and latest subjects in the corporate Java development ecosystem.  They’ll offer up a packed schedule of tutorials, workshops and technical presentations for you to gorge on.  These guys are in demand, and JAX London will be your only chance to see them all in the same place, at the same time, in 2011!!

Readers of O’Reilly GMT can get a special discount on delegate places.  We’re offering a 20% discount to anybody who registers using the promotion code JAX11G1.  You can apply the discount right up to the event start date, and the best rates are available now by using the code alongside Early Bird prices, which are available until 25th January. That means you can save £180 on standard 3-day rates.

Go to www.jaxlondon.com for more information.  Speaker and schedule updates will be posted on the website.  You can also get news, reviews and in-depth analysis on everything in the Java ecosystem by visiting www.jaxenter.com

For further information on JAX London, contact Mark Hazell.  Email markh@jaxlondon.com  Telephone +44 (0)20 7401 4845

Fatboy In A Lean World

2011 January 21

Overview

This is the first of three blog posts about lean software development. Most material on the subject focuses on the what and the how – what to do and how to do it.

These posts will focus on the why – the thinking behind it – and to illuminate the why with practical experience and, critically, data. Lean production is about metrics as much as anything.

About the Author:
Gordon Guthrie is the CEO/CTO of hypernumbers.com,
a cash-flow positive startup in Customer Development.
Hypernumbers are a Seedcamp company.
Gordon previously wrote Erlang: The CEO’s View for GMT.

The first post will look at some background about lean production, and try and synthesise some of the various viewpoints on the subject into a coherent world view.

The second post will focus on particular technical aspects of lean and look at particular issues in the cost of producing software – with metrics.

The third post will look at business aspects of lean, and bring examples of how lean production techniques – the rigorous elimination of waste – have had practical impact on a number of business sectors. Again this post will bring data to the table.

Introduction

Over the last year or so the whole tech world has gone lean crazy. As usual, it is a mixture of hype and fashion wrapped around a steely core of good practice.

Let’s start with some data. Rob Fitzpatrick at the Startup Toolkit1 did a survey of people who were signed up to lean mailing lists – in other words, the self-proclaimed stormtroopers of lean – and the overwhelming majority of them had read none of the canonical texts about Lean Production. This is not a good thing or a bad thing: some people are signed up because it’s trendy, some are intrigued and want to learn – and many are both.

A side-effect of the lean scene being immature is that a lot of nonsense is being spouted about lean. These articles will be practical, aimed at people who want to learn more, and will try and help sort the wheat from the chaff.

My obsession with lean production dates back to the heady days of Web 1.0. I was Chief Technical Architect at if.com – we had 500+ developers and spent money accordingly (think drunken sailors).

It was a real eye-opener – an effectively unlimited budget. One particular day I spent more or less $1m before lunch with no budget, no studies, nothing more than “I need this, today”. For a short period we were starting 35 developers a week, and yet we had a famine amidst plenty: we were struggling to make progress despite all our ‘advantages’.

We struggled, and went live, and took 10% of the UK mortgage market in 18 months. It was a success, but it left me with the clear feeling that there was something very wrong with my understanding of production.

In the rest of this article I will talk through some key issues in lean and then in subsequent articles I will try and illustrate them with practical examples from my ‘fat’ years at if.com and my famine years at hypernumbers.com.

The father of quality, W. Edwards Deming, once said “In God we trust, all others must bring data” – I will bring as much data as I can reasonably share.

What Is Lean?

Let’s begin by putting in place a framework for discussing lean software development. There is an emerging business stack, shown below:

Emerging Startup Business Stack

  • When do we do things? – Methodology (Steve Blank2/Eric Ries3): a statement of dependencies – if you don’t know what you are selling yet, how can you sell it, etc, etc
  • What are we trying to achieve? – Objectives (Sean Ellis’s4 Startup Pyramid): a statement of the conditions that, if met, let you move onto the next stage of the methodology
  • What are the customers doing? – Metrics (Dave McClure5): an idealised representation of the customer lifecycle that you can test against
  • Why are the customers doing what they’re doing? – Analytical Methods (Dave McClure): quantitative, qualitative, comparative, competitive
  • How do we know why the customers are doing what they are doing? – Tools and Techniques: a million different companies
  • Who does what? – Tasks: organisational structure

Lean is the methodology in this stack. A methodology is simply ‘this is how we do stuff’ – it combines formal processes and procedures with informal or cultural ones.

People say stuff like “we don’t do procedures, we just do the right thing” and other statements which play up the elite start-up persona. That’s guff – you need to know the methodology – even if that only means reading Four Steps To The Epiphany.

It’s called ‘lean’ for a reason (it is short for lean production), but lean is a relative and not an absolute term. If someone says “we have a lean start-up” they don’t. If they say “our start-up is lean relative to…” they might – provided they have an appropriate comparator and data.

It is important that you are comparing apples to apples. Consider that poster boy of the MBA circuit, Southwest Airlines. Their customer proposition was that it is cheaper to fly from Des Moines to Oklahoma City than to drive. But because lean production addresses production costs, the lean comparator for Southwest Airlines is other airlines.

Southwest Airlines is a success because its lower production costs enabled it to make more obscure airline routes cheaper and more convenient than driving whilst still remaining profitable.

Lean is not cheap, lean is cheaper – but only in the aggregate. Lean production doesn’t consist of doing everything cheaply. It mostly consists of not doing stuff and doing what you do at the most appropriate price.

Principles Of Lean

Overview

Fred Wilson had a great article6 on the key tasks of a CEO:

  • don’t go out of business
  • hold the product vision
  • build and motivate a great team

The first point is the first principle of lean. Boiling Steve Blank’s Four Steps To The Epiphany down until you can write it on a beermat:

Customer Development is spending as little time and money as possible until you can repeatedly and reliably walk into someone’s office and say ‘I can save you X, let me show you how’

Executing customer development is ‘only’ knocking down all the unknown-unknowns until you get there. So when it comes to estimating time and money to get there – you are on your bloody own.

How long will it take? Unknown. How much will it cost? Unknown. What happens if we run out of money? That’ll be a known.

Don’t Do Work

The key principle of lean is don’t do work – eliminate waste. That is both a positive and negative thing.

The (easy) first half of it is slash and burn, find’n’kill, old fashioned cost squashing, negotiation, shopping around and so on.

The second half is the more complex – it is identifying activities which accumulate to cost. Taiichi Ohno of Toyota used to use a technique called the Ohno circle to do this. He would draw a chalk circle on the floor in the middle of a workshop and get people to stand in it for the whole day and watch what people actually do. In the context of a car assembly line that means walk from here to there and back, reach for this and stack that. Then they would work on reducing that cycle time at a micro-task level.

The third half is more complex again. Eliminate this task at the micro level by changing that other micro task or tasks. The problem here is that the totality of a car assembly plant production line is too complex to be held in a single head. Toyota has a complex culture. On the one hand workers, at a task level, appear to be dehumanised automatons – the tasks are choreographed like ballet. On the other hand, employees are valued and cared for and rotate through different tasks on the line. The skilled empowered knowledge worker is then encouraged to say “if we added this task here we could eliminate 5 tasks that we do later on in another station”.

Physical labour can have a performance aspect to it, in a way that typing on a computer doesn’t. When I worked offshore as a lad, I used to watch the roughnecks grabbing an 80 foot drill string hanging in the derrick with tongs, stabbing it onto the rest of the drill string, casting a chain onto it and spinning it in: a team operating almost at an autonomic level. Toyota aims to have a combination of that ‘work as performance’ combined with a critical intelligence about the whole system in as many people as possible in its workforce.

The fourth half of this process is yet more complex – eliminating micro-tasks by reconfiguring the whole overall flow of the work pattern – having understood the micro in sufficient detail it becomes possible to ‘refactor’ a complex workflow.

You’ll notice that the halves of this process are mounting up – do less stuff is a bit harder than it seems.

The fifth half is knowing when to increase costs to reduce costs. The Japanese notion that the way to lower costs is to pay higher wages spits in the face of the cruder, more reductionist end of Anglo-Saxon capitalism, which is a shame because they learnt it from Henry Ford. The same goes for increasing the training budget, or Facebook’s policy of paying people higher salaries if they live within a mile of the office: staff are more valuable if they don’t commute.

The sixth (and final) half is to embrace insanity – organise your factors of production to bring problems to the attention of management as quickly as possible – this is known in the trade as production-levelling.

Consider a car assembly line that produces 200 cars a day – you have to make 100 four doors and 100 two doors. So the ‘easy’ way is to do 100 two-doors in the morning and 100 four-doors in the afternoon. That way you can ‘optimise’ things for each production run. Well it turns out the right way is the hardest way, so let’s do them four-two-four-two. Then throw in each run should be half-red and half-black, and half with air-con and half-without, and half with walnut dashboard and half without. Yup, a black two-door with walnut and no aircon follows a red four-door without walnut but with air-con.

The point being, if you can do that then you must have solved a gazillion problems about workstation setup time, and colour swapping time, and component ordering and restocking, and loads of other wasteful stuff that you haven’t thought of or identified yet.

And the interesting part of this is that in fixing the detail, and reducing costs, you uncover new, previously unviable business models. We will see practical examples of that in the third post.

The point is not to make life difficult because it makes you morally better (let’s make all developers work standing on one leg) but choosing to organise your factors of production to make problems surface as early as possible and then fixing them as soon as possible. If there is a hurricane between you and where you want to go to, then sailing straight at the hurricane is the right thing to do.

The end result of this is that when you walk into a showroom in Japan and talk about buying a car, you run through the options with the sales staff. And when you complete the order, your specific car starts on the assembly line and, as it rolls off, it is driven to your house.

Don’t Build Things People Don’t Want

The YCombinator slogan is build things people want. Lean production is the double negative of that: don’t build things people don’t want.

This also has its root in Japanese production methods – the focus is on customer-pull rather than product-push – known in the trade as kanban.

The best way to explain this is to consider a tale of two computers.

In the first case you go into PC World (or whatever the big computer and hardware superstore is in your neck of the woods) and browse the computers on display – a sleek tower, a chunky desktop, a slimline netbook. The manufacturer has churned out a set of machines and pushed them to the shop for you to buy.

In the second case you go online to a firm like Dell and order a new machine online. As soon as your order is placed the actual machine configuration that you specified (this much memory, these size disks with this spindle speed, that specification of optical drive) is then assembled to order on the line. And as a consequence replacement parts are pulled from the supplier based on your demand.

Lean Customer Development on the internet is about moving software development from push to pull.

Instead of bundling up some new features, pushing them to live and having a version 2.0 party, you try and get customer signals by a variety of mechanisms that ‘pull’ features out of your team.

Don’t Write Code

Again ‘don’t write code’ has many sides. The first (and easy part) is to be aggressive in your use of already available software, ranging from:

  • operating systems
  • cloud provision
  • infrastructure software
      • databases
      • webservers
      • etc, etc
  • libraries
      • Ajax
      • authentication
      • etc, etc
  • software as a service
      • analytics
      • application/infrastructure monitoring
      • OAuth signon
      • etc, etc

The principle here is the old one of ‘write what gives you competitive advantage’ and ‘buy what doesn’t’.

The second part of ‘don’t write code’ is considerably harder. A key reference here is Paul Graham’s essay Beating The Averages7. Any bloody idiot can write code – the secret is to build a team of developers who specialise in not writing code, and use any tools available to help you not write code.

By that I mean a team with a strong architectural sense who can decompose your business problems into the smallest and cleanest set of sub-systems with the minimum numbers of edge and special cases. Aggressive use of techniques that work at higher and higher levels of abstraction – meta-programming, domain specific languages and so on and so forth – the whole notion of using the most powerful tools available to the best effect.

But critically it also means building a business team who can ruthlessly strip business processes to their bones. An example of this is tripit.com, who reduced their service to you forwarding your itinerary e-mails to them – they would then create an account, register you and process your stuff from just that. The sheer brilliance of that, measured in lines of unwritten code, is immense. The less work the customer has to do, the less code you have to write to support them.

There is a third and much less glamorous part: the whole question of quality. There is an old industry saw: ‘there’s never time to do it right, but there’s always time to do it twice’.

Lean start-ups are positively encouraged to build up ‘technical debt’ – indeed to go heavily into technical debt – but little is written about how and when to go into technical debt, which debts to incur, and which not to.

The economics of technical debt are well understood. It is about 3 times as expensive to do something properly as it is to bodge it the first time and about twice as much again to fix it after the fact.

Technical debt only makes sense in two circumstances:

  • when someone else is going to pay for it – ‘if we can achieve this we can execute a funding round that will make an order of magnitude change in our resources’
  • when you intend to default on the debt

Consider: you have 10 features you want to try out, which can each be bodged in for $1k (with a clean-up cost of another $5k each) or done right first time for $3k.

You do them all properly, it costs you $30k, and then you throw away 90% of your work.

Or you bodge all 10 in for $10k, throw away 9 of them and then spend $5k fixing up the bodged winner – you are $15k up.
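The arithmetic is worth spelling out; this small Python sketch just restates the numbers from the text:

```python
# Worked example from the text: 10 candidate features, 9 of which
# will be thrown away after testing with customers.
features = 10
bodge_cost = 1       # $k to bodge a feature in
cleanup_cost = 5     # $k to fix up a bodged feature afterwards
proper_cost = 3      # $k to do it right first time

# Do everything properly up front: $30k, 90% of it wasted.
do_it_right = features * proper_cost                        # 30

# Bodge all ten, keep the one winner, pay its cleanup.
bodge_then_fix = features * bodge_cost + 1 * cleanup_cost   # 15

print(do_it_right - bodge_then_fix)  # -> 15: the "$15k up" in the text
```

Note how the result flips if the discard rate is low: with only a couple of losers, the double cost of bodge-then-fix on every survivor outweighs the saving.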

If your analysis shows that nobody else will pay back your technical debt, and you can’t default on it, it is always cheaper to do it once and do it right.

(In an exploding competitive market you have to throw market risk into the equation – but that is just a special case of someone else paying – in this case, the customer base you would otherwise lose.)

Why Has Lean Emerged So Strongly?

The great transformation of the industry over the last 10 years has been the availability of already-written (and battle-tested) software, which has slashed the size of the team needed to build a functioning product, along with the crash in the price of hardware.

Back in Web 1.0 at if.com we had a pair of Sun E10Ks as our core machines – which rolled in at £1.5m a pair. We weren’t VC backed – we canned £60m from Halifax – but we had some dealings with Paypal, who were VC backed. They had an E10K as well8.

The entry cost to a web 1.0 firm then was about $1m in hardware plus staff to build things. The structure of the VC industry simply reflected that – there’s a simple biological metaphor. If the cost of a baby is extremely large (compared to the parent) you can’t produce many, so you gotta love and protect them a lot (hence mammalian parenting).

But entry level hardware is now buttons. We also had 500 developers at if.com in 25-odd teams. Many of those did jobs that would now be done by software libraries or SaaS products – we hand-rolled the best part of 1,000 SOAP endpoints (back when SOAP was the wee boy). So the cost of a baby is now extremely low compared to the parent and we have a frogspawn ecology. The funders only care that some frogs get out of the pond and they get to kiss them to see if they turn into a princess – so most tadpoles die, so what.

Additionally, in the old days it was hard to become a VC and get the opportunity to lose a lot of other people’s money (most VC funds underperform the market), but now it is super-easy to become an angel and lose your own money, so the funding side of the game has gone crazy as well.

Interestingly because the industry has so many network effects that lead to competitive advantage we are starting to see insect-like cousin-group selection – start-ups that are loosely part of a bigger group who work together and where the members of unsuccessful start-ups can see second careers as early employees of the successful ones. The YCombinator alumni are probably the best known of these, but Dave McClure is building his own swarm and various other well known early investors are making sectoral investments.

My gut instinct is that this will formalise. There are other hit-based industries (think of 60’s pop groups out of the Brill Building, or Motown, or the way magazine publishers work). People work for the group for salary (and upside), and resources (editors, band members, backing singers) are reallocated from flops to push hits. If I was running 500 Startups I would share risk and reward across my base and build a crop of ‘future CEOs’ that I would rotate in junior roles through my teams to build experienced executives. As long as folk are heavily tied as ‘founders’ to their start-up that will be hard. Different capital structures would be needed.

Zed Shaw has identified another trend9 that I think will also grow over time. Many of the new crop of B2B start-ups are ‘departments of Web 1.0 startups turned into a SaaS’.

At if.com we had paid staff who did the work that is currently done by companies like:

  • Linode -> our data centre staff
  • Kiss Metrics -> our MIS teams
  • io: -> survey contracted market research people
  • Heroku -> our platform teams
  • Google Analytics -> our MIS teams
  • etc, etc

Essentially whole teams done gone and privatised themselves as RESTful services.

The beauty of selling start-up-oriented SaaS to start-ups is that all of them will buy it – the successful and the unsuccessful ones. It is hard to think of a startup I know that won’t leave a coupla thousand dollars stuck to a YCombinator SaaS start-up or two. But starting a SaaS company now that offers a service which only mature companies can use? It would almost certainly die from lack of customers.

As SaaS grows over the years I think you will find the opportunities for SaaS companies will continue to grow. If I was stuck as the developer in a day job with the Department of Paperclips in Monster Corp I would be taking a hard look around me and thinking:

  • does this job exist in every company?
  • what would this job look like if it were running over a REST interface?
  • what would truly excellent customer service look like for the Department of Paperclips?
  • what are Monster Corp’s drivers in relation to this department?
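
To make the thought experiment concrete, here is a minimal, entirely hypothetical sketch of what the Department of Paperclips might look like once it ‘runs over a REST interface’. The resource names, fields and workflow are all invented for illustration, modelled as plain in-memory handlers rather than a real HTTP server:

```python
# Hypothetical sketch of a "Department of Paperclips" as a REST resource.
# Every path, field and status here is invented for illustration.
import itertools

_ids = itertools.count(1)
_orders = {}

def post_order(payload):
    """POST /paperclip-orders — file a requisition, return the new resource."""
    order = {"id": next(_ids), "qty": payload["qty"],
             "dept": payload["dept"], "status": "pending"}
    _orders[order["id"]] = order
    return order

def get_order(order_id):
    """GET /paperclip-orders/<id> — look up a requisition."""
    return _orders.get(order_id)

def approve_order(order_id):
    """PUT /paperclip-orders/<id>/approve — push it through the workflow."""
    _orders[order_id]["status"] = "approved"
    return _orders[order_id]

# A requisition moves from filing to approval in two calls.
order = post_order({"qty": 500, "dept": "Accounts"})
print(approve_order(order["id"])["status"])  # approved
```

The point of the exercise is that once the department’s job is expressible as a handful of resources and verbs like these, it is a candidate for privatising itself as a SaaS.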

The great problem with young founders is that they don’t have the experience to build products for people outside their circle of experience. The world of open source is full of text editors because the builder is also the customer. Likewise start-ups that build tools for start-ups are a no-brainer. Well a couple of years in a big corp now will provide you with critical Subject Matter Expertise for your SaaS start-up to come.

State Of Play In Lean

When the lean movement took off there was a distinct tendency to overplay lean technical techniques (agile, scrum, test-driven development and continuous deployment). This has mellowed a bit, but the point is well made: lean technology is only useful in as much as it supports lean business processes.

Like all hot concepts there is a lifecycle of hype. When people realise it is useful they first start by re-categorising everything they already do as the flavour du jour. Then it blooms and blooms and starts being applied quite wildly.

An example of this would be the rise of paper prototypes as MVPs. Paper prototypes are a great and cheap way of working out design and interface issues but they need to be tempered with capability.

I could knock up a paper prototype of an iPhone teleportation app and get great customer stats (‘96% of 14-25 year olds would def-in-et-ely pay $1,000 for this app, right here, right now!’). But without capability it is just piss and wind.

Patrick Vlaskovits and Eric Ries would define an MVP loosely as ‘enough product to test a hypothesis’ – and would insist that if it isn’t testing a hypothesis, then it isn’t an MVP.

I am all in favour of testing hypotheses by the cheapest mechanisms possible, but my gut instinct is that you probably should not call it an MVP unless it is a product – a product that you are actually trying to sell, right here, right now. An MVP, after all, is a Minimum Viable Product. If your users are not your customers (but your product), you should be thinking about how to get enough users to ‘sell’ to whoever your customers are (lead generation, advertisers, market information, whatever).

One of usability guru Jakob Nielsen’s maxims10 is:
To design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior (sic).

Allow me to introduce Guthrie’s Corollary:
Nothing cuts through the fog of customer intentions better than a palm crossed with silver.

I’m not a reductionist who believes that all human relations can be reduced to the bare cash nexus, or that the glories of human companionship in their myriad complexities are just about getting your bones jumped, but there is a delicate point at which these central topics must be addressed – and it seems to me that the MVP sits right at that nitty-gritty.

One of the reasons I think we should insist on this MVP point is because of people’s behaviour towards income.

Tolstoy once said:
Happy families are all alike; every unhappy family is unhappy in its own way.

A quick look at some demand curves is instructive here:

[Figure: demand curves – number of customers vs price]

It seems to me that:

the freemium ends of demand curves are all equally uninteresting; the premium ends are all differently fascinating.

I think start-ups should take a hard look at exactly what they are testing when they launch free or freemium products. How many times do we really need to test the proposition “do people like free stuff?” If you think that is still an interesting question, my Nigerian friend would like to sell you some ursine scat-porn.

If your business is ‘selling users’ then free is obviously great. But if it is not, then freemium is a marketing strategy that you should explore once you have product-market fit. When you understand your fulfilment costs, and your customer acquisition costs, and the drivers that motivate your customers, then you must consider exploring freemium11.

A fundamental point is that, for a particular organisation, lean technology involves Donald Rumsfeld’s useful category – the known-unknowns (people know how to do continuous deployment, it has been solved, we as a company simply lack the skills) whereas lean business development involves unknown-unknowns (will anyone ever buy this?).

The thing about known-unknowns is that they are (thankfully) reducible to a ‘cookbook’ approach – the problem with the unknown-unknowns is that they are all down to judgement, culture, organisational structure and other more opaque skills.

I suspect the glamour boom of lean is fast approaching an end and it is settling down into the mature discipline of the hard grind. A good indicator that ‘the market in lean is at the top’ would be this discussion12 on Hacker News right now as I write. Mr Waxman boldly poses the (rhetorical) question “The Beatles, The Original Lean Start-up?” (question mark interpolated by me). The answer is inevitably “No”. Those of you with sharp hearing should be able to discern the gentle rustle that a single eyebrow makes when it is raised archly.

Practical MVP’s And Launching

There is a certain cargo cult approach to launching a company – launch early, launch often, grow up in public, fail fast, ABL (always be launching).

The phrase ‘cargo cult’ comes from the Second World War. American troops would arrive on some obscure island. They would clear the jungle, flatten the ground, put up a small tower, and lo, “big silver birds would come and disgorge good things”.

So after the war, the locals would clear some jungle, flatten the ground, make a small tower and wait for the silver birds and their precious cargo, which, alas, never came.

In the absence of a theory of launching, people simply cargo culted – confusing “achieving what someone else had achieved” with “doing what someone else does”.

Successful start-ups launched early and iterated, therefore we should launch and iterate. Note the difference – the critical adverb early. When is it early enough?

A theory of launching is emerging: the components are falling into place.

We have purpose – the purpose of an early stage start-up is to find a repeatable business model.

We have an operational methodology – customer pull; launching is the only way to get the customer to pull product from us.

What has been missing is a concise statement of customer motivation to participate in this. Paul Graham has supplied a concise summary of this in his recent essay13:

Usually we advise start-ups to launch when they’ve built something with a quantum of utility—when they’ve built something sufficiently better than existing options that at least some users would say “I’m glad this appeared, because now I can finally do x.” If what you’ve built is a subset of existing technology at the same price, then users have no reason to try it, which means you don’t get to start the conversation with them. You need a quantum of utility to get a toehold.

Turns out that the theory of launching is simply reciprocal gifts: “I will give you some of my time if you can give me something that I can’t do at the moment”.

So you can launch an MVP when you have something you can sell which is better enough than the alternatives to make someone think “I will give that a try”.

Note the contradiction, nay healthy tension, between customer and user in that statement.

There’s a lovely quote from Goethe (it was one of Lenin’s favourites, but don’t let that put you off):
Grau, theurer Freund, ist alle Theorie,
Und grün des Lebens goldner Baum

It translates as:
Gray, dear friend, is all theory, and green is life’s golden tree.

In the subsequent posts we will pull away from gray theory and embrace the prickles, thorns and branches of Yggdrasil.

Footnotes

8 on reviewing this I remembered that back in the mid-80s I had the privilege of having an account on the University of London Cray-XMP – the first computer in Blighty with 1Mb of RAM (core actually). My annual allocation of CPU time on this behemoth was a staggering 20 minutes. At hypernumbers our team Xmas dinner (for 4 employees and families) costs as much as our quarterly hardware bill. Hardware is free.

11 Another thing that is beginning to grate is the massive overuse of the word pivot. I will try and express the concept in the native language of Dave McClure (or Dave McFuckingClure to give him his proper title), that bard of the start-up world. If it’s “two points to windward, Master Helmsman, and trim the mains’l” then it ain’t a pivot. If it’s “hard to starboard, for we be on a lee shore. Strike thae stunsl’s ya scurvy dogs, or I’ll lash every man jack of you. All hands up the ratlines, make sail, make sail, for your life depends on it!” then it probably is.

13 I first read this quote in http://ycombinator.com/atyc.html but it turns out it was hiding in plain sight, looking for this reference I found an old post by Paul Graham on Hacker News http://news.ycombinator.com/item?id=542768

Ant Miller – Guerilla Hacker of BBC R&D Fame

2011 January 11
Ant Miller Launches a Rocket

Ant Miller (courtesy Herb Kim)


The BBC has a long and enviable history of innovation. For close to a century, they have been at the forefront of the development of broadcast technology.

Ant Miller is a Senior Research Manager in the BBC R&D Department, and a Geek Dilettante in his spare time. Ant was a bookseller when I first came across him, working for Waterstones in Brighton, running the computing section during the web’s first boom. It was great to find a computing book buyer who knew what the books were about: we’d have coffee and natter about the world of technology, Man City (his team), Huddersfield Town (mine) and where Lego Mindstorms fitted into the grand scheme of things. Next time I came across him, I was helping my colleague Josette Garcia run the Make magazine stall at Hack Day. Ant had joined the BBC and was there in an official capacity, and it was lovely to see him thrive under their auspices. The following year he was back at Mashed, building and launching a rocket from Alexandra Palace’s exalted grounds, one of the highlights of a highlit day. Since then, he has been behind the BBC’s significant presence at Maker Faire, among a million other things.

Ant embodies all that’s good about the BBC – imagination applied with a sense of fun and a commitment to quality, an openness intent on reaching out to people and organisations outwith the BBC, and an instinct for the common good. He blogs for work on the Research and Development subsite, and blogs for himself at Reithian:


What do you do for the BBC?
I work in the R&D dept, specifically inside our Technology Transfer Team. BBC R&D has about 160 engineers, scientists and other research staff based in labs in West and Central London and Manchester working on a wide range of projects across broadcast tech. We work on everything from the latest cameras to digital media management systems and advanced transmission technologies to next generation user interfaces, plus most of everything in between.
Our Technology Transfer Team is responsible for getting the technologies the engineers come up with out of the lab and into the studio, the outside broadcast or the living room – wherever they need to get to in order to deliver value to the BBC and our audiences. My role inside that team is to work on all the communications channels we use to get our ideas out to the wider world, and to get the information we need into the dept about the challenges and problems in technology that the rest of the BBC need us to solve.

Are there specific projects you are working on right now?
I’ve got a few internal and external events in the pipeline – events are a key part of the external communications role, and these can be anything from a small hackday to help a team develop a prototype, to a large international conference, where we might have multiple demonstrators on stands, papers in conference and, if we’re lucky, a couple of keynote speakers.
I’m also looking at the blog and website content – it’s an ongoing task to ensure we keep up a steady flow of interesting relevant content for our readers and the key people we want to reach. Part of that is making films for the blog and website, which is great for focussing our efforts, but is pretty time consuming too. We’re very lucky to work with a great producer who used to work on Tomorrow’s World and the BBC Micro project.

Why is R&D important to the BBC?
It’s how we stay relevant, or part of how we do it anyway. The public service broadcast infrastructure, whether it’s over the airwaves with ‘sticks on hills’ or over IP, is a massive system – in order to change it, to keep it relevant and to stay up with advances that come in other areas of media technology, the BBC needs to be almost precognitive in how it prepares for the future. It’s like an oil tanker, or more like a fleet of oil tankers, that has to see far ahead in order to turn in time and avoid nasty collisions. R&D is like a scout ahead of the fleet, but still part of it – we see further ahead, and sometimes we actually have to make the channel ourselves (hmm, that metaphor paid off ok, in the end!).

From the point of view of a technologist, what are the 5 most vital innovations to come out of the BBC during its long history?
Vital is a tough term, but the plethora of technical standards that BBC R&D has helped define and in many cases led over the years has to qualify. I’m thinking of things like NICAM stereo, MXF digital file formats and the DVB-T2 broadcasting standard that’s allowing us to deliver HD television.
It’s impossible to over-emphasise how critical open standards are to the broadcast industry. As I mentioned before the industry as a whole is big, and in the UK the BBC makes up a very large part of the industry. If you were to put all of that industry into wholly proprietary technical platforms, with no interoperability through standards, then a number of problems could arise. With standards-based technologies, the market can allow innovation and competition, keeping costs to a reasonable level. They allow confident investment in platforms, by both ourselves and in industrial partners. By taking these standards to the international level, which we always try to do, we allow the UK industry to reap the benefits of the global media technology market, and through the wonders of international standards-based broadcast, we can all enjoy the technological and artistic marvel that is The Eurovision Song Contest!
Any others? Well, once upon a time BBC R&D went a bit further and actually specified and designed bits of kit – microphones, mixing desks etc. Many of these are quite iconic – BBC Heritage maintains a pretty good collection of them. One product that has gone on to great success is the LS 3/5a speaker, a lovely little reference speaker designed for applications where a compact speaker was needed. Rogers have just produced the 50,000th pair, a very fine lacquered set, and they’ve very kindly presented them to us. We’re just deciding where best to show them off!

How has the BBC’s innovation benefited other media companies beside the BBC?
We do take very seriously our role of being the R&D lab not just for the BBC, but to an extent for the whole of the UK media industry. Our standards work is useful, essential even for companies across the country. We’ve always worked extensively in partnership with other companies, and with universities and research institutions, but more than ever the really big projects need partnerships to work.
Some examples are pretty clear – the work we did to develop the digital television standards allowed the market to focus on delivering on the products that the audience was going to need, so when the transmission of DVB came through, there were set-top boxes ready to go. This is actually happening again right now as the HD digital television service rolls out – in order to deliver HD pictures our engineers have had to develop fundamental transmission technologies, and the reference equipment to demonstrate and test these breakthroughs.
One of my colleagues has also likened us to a teaching hospital – it’s a nice comparison. All across the BBC there are young people starting out in their media careers, learning the trade and building their experience. In R&D we have a relatively large pool of trainee engineers and scientists – typically a dozen or more. We hope that our investment in these young researchers will pay off in their long-term contribution back into the department’s work, but some will go on to provide technology leadership across the rest of the BBC, or at our partner technology companies, or other broadcasters, or elsewhere in the UK technology industry. And that’s absolutely fine, because we hope we have treated them well, if not lavished riches upon them, and given them a passion for the technology and a real respect for the public service ethos – and that will, in the best Reithian traditions, make us all better off.

It’s a long way from an R&D prototype to an industry-ready device or service. What has working in R&D taught you about industrial robustness?
It’s hard, and it’s something that you avoid thinking about early on at your peril. Much of the work we do in R&D is long-lead stuff, years out into the future, and we even do some pretty basic fundamental research, especially in the radio frequency domain, and increasingly in perceptual psychology type science. However, before long we usually start building stuff, and whether it’s software or hardware, there’s always a tension between code or kit that’s good to hack around with, to experiment on and to be really creative with, and building something that can eventually be developed into a stable mature platform.
In the hardware domain this kind of solid engineering practice is in our blood. When it comes to software though, and especially when we look at widely distributed systems and services that have to be extremely scalable in the final deployment, it’s clear that we can develop better processes and practices, and that’s a key development process that we’re going through right now.

It’s about a year since your department moved from Kingswood Warren to White City. Now the dust has settled, was it a good move? What are the upsides? What are the downsides?
It has been a massive project to relocate the department and the resources from one out-of-town location to two separate inner city facilities, and at the same time integrate a third existing lab into the department, but it’s pretty much done now for London, and in Manchester the eventual home of the team there is rapidly taking shape. No-one would pretend that the move didn’t impact on this year’s work, but we’re already feeling the benefit of being in White City nestled right in with the rest of the BBC. Kingswood was lovely to look at, and had its benefits, but we’re an essential part of how the BBC addresses a rapidly developing future, and it’s clear that a long distance relationship just wasn’t going to achieve that anymore. So, now we’re hosting events pretty much every week where colleagues from all sorts of technical and even editorial areas are meeting with our engineers and research scientists. We’ve managed to keep a significant element of independence too – R&D is meant to try things, and to occasionally break them too, so we have our own very high capacity network, our own IT support too. Centre House is slowly but surely becoming home, and we’re making our mark on W12 as a whole.
I think we’ll understand what the impact was this year when we don’t do it next year – not having three months dominated by parking projects, packing, sorting, relocating and commissioning new facilities should see us produce significantly more public results in 2011, but quite what the difference will be I don’t think any of us are really certain.

I was sorry to hear of the demise of BBC Backstage. Why did it shut down? What did it achieve in the 6 years it was around? Any plans to keep the Backstage community together?
Backstage has run its course, and overall done what it set out to do in making Open Data, Linked Data, and open collaboration a key part of the way that business as usual gets done at the BBC. It can be argued that that is a journey still to be completed (personally I doubt it will ever end) but the idea behind finishing up Backstage is that it took the crusade as far as it could. Now we have the idea planted and growing in the main business, and there we need to foster it and make it grow. So long as we run Backstage then it’s always going to be Backstage’s thing to do, and now it needs to be the thing that everyone does.
Of course Backstage was much more than that – the events, the community all made a great contribution to the BBC and the wider culture, and we hope that this engagement will continue. My role as external comms for R&D means I’m maintaining our engagement with the community, but I’m also working hard to try and build a bigger ‘metacommunity’ of developers and designers that pulls in more than the BBC. The potential of these communities seems to run to a power law of the resources available – if we can pull off this bigger thing (and I really want it to work) then the result could be pretty spectacular. It’s all to play for now, but a lot of us are really excited about what we might be able to launch in 2011.
You’ll still have the feeds and hopefully many more, all served out of a new site that’s in development.

How did you get involved in Maker Faire?
I have no idea. Honestly, no recollection at all. Glad I did though, it’s amazing.

The Surround Video you demonstrated at Maker Faire 2010 was stunning. Have you been able to develop it further? What future uses do you see for it?
Ooh, come to MF 2011 – we’ll blow your mind! I’m not going to spoil it, but anyone who came along to the Making Connections festival of Radio in Belfast in September may have seen a little demo of part of this amazing thing that’s being built. We’ll have a full-blown demo at the Centre for Life in March 2011, and then we’ll be able to announce its deployment, and that’s cool too. This is all a demonstrator of course, and seeing as we’re still working on it there’s an element of risk, but if it works, well, you’ll see.

Will you be attending Maker Faire in 2011, and, if so, what plans do you have for it?
Me, personally? No, I won’t, sadly. I’m running our participation in the Big Bang Science fair which clashes. I’ve done MF two years in a row, so we felt we really needed to help out with Big Bang this year. The team in the North Lab will be coming along though, and in addition to the ‘Surround Video plus’ demo they should have a typical range of the homebrew hackery and weirdly futuristic media technology that we hope is becoming our hallmark.

How did you get into technology?
Not sure really – I’ve loved Lego for as long as I can remember, and especially Technic. As a kid I worked Saturdays in my Dad’s garage, doing MOTs, changing oil, doing brakes, eventually stripping down engines (he sold Lancias – lots of stripping down engines! Especially the flat four in the Gamma Coupé, as that used to like to chew on its own valves). I’ve never really got deeply into coding though – just at a very shallow, naïve level I can tinker with a script.
In my mid-twenties though, when I’d more or less given up on doing much technical and was running a law firm’s archives, some friends started an AI course at Sussex, and long summer evening chats when they came back home made me realise that there was a change coming, with technology about to surge into people’s lives with a potential impact far outstripping what we are generally equipped to deal with in our day-to-day lives.
So, I gave up the day job, signed onto a college course, and scraped my way into University by basically hassling the admissions tutor into letting me in. It was brilliant – hard work, and because I’d tried and failed a degree once already I got no grant, but perhaps that helped. I worked almost full-time, studied hard, hardly went out at all. The curriculum I treated as a scaffold, a launch pad to jump into a world of mindbending ideas.
I still can’t believe how indulgent the faculty of COGS at Sussex were – I was just a madly enthusiastic undergrad, barging into all sorts of tutorials, asking weird questions, getting weirder answers. Last week I had to throw out all my notes – sort of gutted, they’d gone mouldy under the stairs, but I hadn’t looked at them for five years. That really shows the true value of that sort of course – it’s not what you learn, it’s how you learn. Learn to love ideas, and the people who have ideas, and to explore them with gusto and excitement.
Honestly, I don’t even know if I *am* into technology – I’m into ideas. There just happen to be a lot of interesting ideas about technology now!

How did you get involved with the BBC?
An ex-boss from my first job after university was working here, and called me in to do some casual work to help out the archives on some business change/ technology projects. It was pretty unexpected, but probably a good introduction to the structures and processes that make the BBC work, in the shadows, chugging away. It also exposed me to the massive challenges to getting an entity the size and shape of the BBC to take best advantage of tech – we are nobody’s ideal customer, nobody’s typical user, and the mismatch with expectations from vendors can be vast!

When we met you were a student in Brighton, funding yourself as the computing book buyer at Waterstone’s. What did this teach you about technology?
It’s a curious position to be in, looking after information about information – meta-information. You have to develop a sort of pattern-recognition algorithm for understanding the phases and waves of technology propagation into the community. It’s really hard to track, but vital to do so, as unsold computer books go out-of-date so quickly and are so bloody expensive! The worst offenders, I quickly learned, were books on JavaScript or other client-side web tech – it’s an area where, with some notable exceptions, the new utterly overwhelms and sweeps away the old. At the same time a good book in this area sells extremely well, as it’s essential for professionals seeking to keep up-to-date.
Sometimes I think it might be nice to get back into this business, but the way the big chains have aggregated and centralised their functions seems to have taken the fun out of the game a bit, and the online retail competition makes the risk so much higher.
Don’t want to big you up too much, but I think O’Reilly contribute to the community and the industry more than many. It’s essential, when bookshops are squeezed and homogenised and the online retailers are paring to the bone and giving almost nothing back to the public, that publishers like yourself take the time and effort to make the conversations work. It’s heartening to see O’Reilly and others recognising the responsibility of the publishing trade to be the breeding ground for ideas.

Ten years ago, you told me that Lego Mindstorms would make a fantastic teaching tool to get kids into programming. Has Mindstorms been trumped by the Arduino these days in that respect? Could it still be used to teach kids to code?
I still think Mindstorms has a huge role to play in getting engagement in real-world computing into education, but it’s going to take more than tools.

There’s a cultural change needed to make the most of this – we need to get over this idea of the two cultures that has been so fundamental to the way the education system has worked in the UK for many years. Technology, practical knowledge, industriousness, these are not the preserve of an uncreative, utilitarian group within society – these are the marks of a true and full citizen, a member of society, a full member who makes that society better by ideas and things. Somehow the US and Germany, and I think even France, have a greater understanding of the nobility of fabrication, true making, than we have. People like James Dyson and Tim Berners-Lee are somehow exotic, weird outliers in our culture. I’ve got masses of time for Christopher Hitchens and A C Grayling, but there’s no way I’d put a thinker and artist above a maker, an engineer, a designer or scientist. All of these people explore ideas and build a better world, and we have to recognise that a PPE or Classics degree confers no higher cachet than an MEng or BSc.
Wow, big rant. Anyway, yes, Mindstorms is/are good, but something like Arduino or mbed is great too – a valuable extension because it’s less toy-like, but still very accessible. It makes you realise this is a serious, sensible way to engage with reality, and it reinforces the concept that ‘creative idea plus embodied rule set equals reality YOU CAN CHANGE’. That is the core idea of programming, and to an extent engineering and applied science (though the object there is to figure out what that rule set is!). The point is not that these tools teach programming, useful though that is. They do the most valuable thing it is possible for education to do: they teach you about your place in the world and your capability to change that place and that world.

Startup Weekend Brussels – 28th Jan 2011

2011 January 6
by Kris Buytaert

Later this month Startup Weekend Brussels will take place, obviously in Brussels.

Startup Weekend Brussels will cram the whole Startup LifeCycle into 54 hours. It’s a weekend-long simulation of the entire process of creating and launching a Startup. You start with a raw idea, build it into something meaningful, develop a product concept, do lots of brainstorming, planning and coding, work like mad and have tons of fun.

It’s also your only chance to dive into the life of a Startup entrepreneur, to meet the people who live that life, and to make up your mind if it’s something for you.

They have a lot planned, starting off on Friday evening, where everyone gets together and figures out who else is there and what would be interesting to build. Participants pitch their ideas to the crowd; teams then form, ready to focus on the product and basic prototype building on Saturday morning. There will be a bunch of mentors around asking critical questions.

The big day, however, will be Sunday: at around 6pm, presentations to the jury begin.

During the weekend you’ll get food for body, thought and soul:

  • When: January 28, 6 pm till January 30, midnight.
  • Where: BetaGroup Coworking Space, ICAB; 4 Rue des Pères Blancs, 1040 Auderghem (Brussels).
  • Price: €75 for the entire event, including 7 meals, the drinks, the works…

Speed Coding 2011: Beyond Basic TDD

2010 December 13
by Skills Matter

Speed Coding London 2011: just like speed dating... but with code... and better

Inspired by an openspace session at CITCON London on practices and techniques that take people beyond basic TDD, we have organised Speed Coding London 2011, comprising four intensive hands-on coding workshops in 2011 that explore the advanced topics of test-driven development and object design.

In the Speed Coding workshops, you’ll work in pairs, frequently switching partners, to explore different models and to both provide and benefit from pair coaching. Bring your own laptop and work in your favourite IDE/language. The speed coding exercises are designed by Gojko Adzic, who will also facilitate the workshops.

Find out more here

getitmade – From Prototyping to Product

2010 December 13

There is a huge gap between a prototype and a market-ready product. Besides the many engineering issues, funding can be a breaking point for any project. getitmade.com sets out to help solve that problem by creating a pledge bank for future products that utilises social networks to build a critical mass of pre-orders, allowing the product to be sent into production.

I asked getitmade’s Nick Ager what they aim to achieve:

GMT: Who is involved in getitmade?
Nick Ager: getitmade was founded by Rob Dobson and Nick Ager – we are two entrepreneurs who became frustrated by the difficulty of taking our prototypes into production. Having both successfully founded previous companies, we became convinced that there must be a simpler way of getting a new product to market.

How did getitmade come about?
We both harbour a love for creating physical products and found that our experience building software companies could be channeled to help. We were discussing our product ideas, and how best to find out if there was a market for them, when we came up with the idea for getitmade. It’s the website we wanted to exist for our own product ideas. We hope other product innovators will find the platform equally useful.

What problem does getitmade solve?
Market-testing and funding from prototype to production. Many great ideas stumble at this stage, though a good example of one that broke through is the ubiquitous Brompton folding bicycle. The inventor, Andrew Ritchie, was turned away by existing bicycle manufacturers, who said there was no market for his folding bicycle. The banks were no more help and refused to provide funding. Ritchie overcame these obstacles by pre-selling bicycles to his friends and family. getitmade uses the same pre-selling model but harnesses the reach of online social networks such as Facebook and Twitter to help many other innovators fund taking their products from prototype into production. No money changes hands unless pre-sales of the product reach production level and it goes into manufacturing.

What is your experience of the UK VC world?
Rob sold one of his previous businesses to a Private Equity company and is an angel investor in a number of other companies. getitmade is currently self-funded but we keep an open mind about funding alternatives in the future.

What was your initial itch that made you think getitmade would be useful to you?
Over a beer we discussed a number of great new product ideas (ok, that’s intended to be a bit tongue in cheek!). Nick was keen to solve a problem he’d encountered while sailing the oceans in the previous few years – an errant auto-helm that often got him off track and sometimes into trouble. Rob’s big idea (aside from the dart that automatically finds double-top, or the bowling ball that automatically gets a strike every time) is a projection bedside clock that is really a computer and allows you to watch YouTube and read books on the ceiling. For that one, go to getitmade.com – Rob’s looking for help getting the idea into production, and there is a really cool 3D design for how it could look!

Is getitmade just for inventors?
It’s for anyone who has a product idea and wants to test the market and raise the funding to realise their ideas. It is also for people who already have products in production and would like to aggregate demand and move towards batch production – to simplify their supply chain and reduce the reliance on stock.

Do you help find manufacturers?
Yes, we aim to help innovators find manufacturers that will be appropriate for the quantity of their product they plan to manufacture.

What’s the difference between getitmade and eg kickstarter.com?
getitmade is focused on products and actual pre-sales of those products. Conversely, kickstarter (for instance) uses a donation approach to fund projects that may result in something being produced. The key difference is that we aim to protect customers’ interests rather than relying purely on trust.

Have you found a UK/European bias to your sign-ups?
We are UK-based and, almost inevitably, have started to focus on the UK and Europe. Having said that, we’re convinced there is opportunity worldwide; the site can be used by innovators from anywhere and already caters for multiple currencies and shipment destinations.

Will you be at Maker Faire this year?
Yes – we’re very excited and hope to be exhibiting at Maker Faire up in Newcastle on 12-13 March. We look forward to meeting the maker community and promoting the products already on getitmade.

What is your competition about?
Our competition is to encourage the first few innovators to upload their products onto the site. There’s nothing to lose by putting product ideas onto the site, and we hope to overcome any initial hesitancy by offering a few prizes. There’s a range of cash prizes and a MacBook Air to be won.

Functional Programming eXchange – London, March 18th 2011

2010 December 8
Functional Programming eXchange 2011 in London March 18th

Skills Matter is proud to announce the next Functional Programming eXchange, scheduled for March 18, 2011.

We’re working with Robert Pickering (@robertpi), the programme lead, to put together an inspiring programme, featuring talks by many leading experts!

Who will be there?
The conference will feature talks by Simon Peyton-Jones (Haskell), Sadek Drobi, Miles Sabin (Scala), Matt Davey (F#), Simon Cousins (F#), David Pollak (Lift), Adam Granicz (WebSharper), Antonio Cisternino, Tomas Petricek and more…

Programme
Most of the talks and speakers are confirmed and you can find the full programme here:
http://skillsmatter.com/event/scala/functionalpx-2011/wd-1294

Follow #functionalpx on Twitter for programme updates.

Tickets
Registration is open, and if you register on or before December 31st, tickets go for just £75 (+VAT) each. So if you think you’d like to join us for a day packed with learning, get your skates on and register today!

Functional Programming eXchange Workshops
In the same week as the Functional Programming eXchange, on March 16-17th, Robert Pickering will give his Beginning F# Workshop, and attendees of this workshop will get a free ticket to the eXchange. So if you are keen to get up to speed with F# and Functional Programming, this is your chance!

Github – From Forking Code To Spock On The Decks

2010 December 7
Github co-founder Tom Preston-Werner

I spoke to Github co-founder Tom Preston-Werner at Erlang Factory in June. We talked about why Github is different, what they were hoping to achieve, and what they have learned since launch:

Interview with Github co-founder, Tom Preston-Werner from oreillygmt on Vimeo.

iMinds 2010, Ghent

2010 November 30

The date: Thursday, December 16th
The place: the ICC Ghent
The event: iMinds 2010

As part of the Future Internet Conference week in Ghent, this year’s iMinds will once again bring together the brightest minds from the Academic, Corporate and Start-up worlds of Belgium and its surroundings.

With talks from people such as Peter Hirshberg, Ben Verwaayen and Neelie Kroes, the event promises to be as interesting as ever.

This year, iMinds is embedded within Future Internet Conference week, a full week of back-to-back conferences on topics such as IPv6, Living Labs and more.

iMinds is hosted by the IBBT, the Interdisciplinary Institute for Broadband Technology, an independent research institute founded by the Flemish government to stimulate ICT innovation. The IBBT focuses strongly on the link between ICT technology and its application domains, recognizing that software, communication technologies and networking, service platforms, etc. serve as crucial infrastructure for a number of industries such as health-care, mobility and media. This year, iMinds is co-hosted by Bell Labs and is intent on exploring the current and future state of the internet, now that it has become one of the most critical utilities of the 21st century.

iMinds is open to anyone with an interest in the subject of ICT innovation and focuses on researchers, entrepreneurs from large and small companies, venture capitalists, civil servants, politicians and creative individuals with different disciplinary backgrounds.

More news on Twitter