
All about Catalyst – interview of Matt S. Trout (Part 2 of 3)

2014 January 20

Does all that flexibility come at a price?

The key price is that while there are common ways to do things, you’re rarely going to find One True Way to solve any given problem. It’s more likely to be “here’s half a dozen perfectly reasonable ways, and which one is best probably depends on what the rest of your code looks like”; plus, while there’s generally not much integration-specific code involved, everything else is a little more DIY than most frameworks seem to require.

I can put together a Catalyst app that does something at least vaguely interesting in a couple of hours, but doing the sort of five-minute wow-moment thing that intro screencasts and marketing copy seem to aim for just doesn’t happen, and often when people first approach Catalyst they tend to get a bit overwhelmed by the various features and the ways you can put them together.

There’s a reflex of “this is too much, I don’t need this!”. But then a fair percentage of them come back two or three years later, have another look and go “ah, I see why I want all these features now: I’d have written half as much code if I hadn’t assumed I didn’t need them”. Similarly, the wow moment is usually three or six months into a project, when you realise that adding features is still going quickly because the code’s naturally shaken out into a sensible structure.

So, there’s quite a bit of learning, and it’s pretty easy for it to look like overkill if you haven’t already experienced the pain involved. It’s a lot like the use strict problem writ large – declaring variables with my in appropriate scopes rather than making it up as you go along is more thinking and more effort to begin with, so it’s not always easy to get across that it’s worth it until the prospective user has had blue daemons fly out of his nose a couple of times from mistakes a more structured approach would’ve avoided.

So, it’s flexibility at the expense of a steep learning curve. But apart from that, if I could compare Catalyst to Rails, I would say that Rails tries to be more like a shepherd, guiding the herd the way it thinks they should go, while Catalyst allows room to move and make your own decisions. Is that a valid interpretation?

It seems to me that Rails is very much focused on having opinions, so there’s a single obvious answer for all the common cases. Where you choose not to use a chunk of the stack, whatever replaces it is similarly a different set of opinions, whereas Catalyst definitely focuses on apps that are going to end up large enough to have enough weird corners that you’re going to end up needing to make your own choices. So Rails is significantly better at making easy things as easy as possible, but Catalyst seems to do better at making hard things reasonably natural if you’re willing to sit down and think about it.

I remember talking to a really smart Rails guy over beer at a conference (possibly in Italy) and the two things I remember the most were him saying “my customers’ business logic just isn’t that complicated and Rails makes it easy to get it out of the way so I can focus on the UI”, and when I talked about some of the complexities I was dealing with, his first response was, “wait, you had HOW many tables?”.

So while they share, at least very roughly, the same sort of view of MVC, they’re optimised very differently in terms of user affordances for developers working with them. It wasn’t so long back that somebody I know who’s familiar with Perl and Ruby was talking to me about a new project. I ended up saying: “Build the proof of concept with Rails, and then, if the logic’s complicated enough to make you want to club people to death with a baby seal, point the DBIx::Class schema introspection tool at your database and switch to Catalyst at that point”.

But surely, like Rails, Catalyst offers functionality out of the box too. What tasks does Catalyst take care of for me and which ones require manual wiring?

There’s a huge ecosystem of plugins, extensions and so forth in both cases but there’s a stylistic difference involved. Let me talk about the database side of things a sec, because I’m less likely to get the Rails-side part completely wrong.

Every Rails tutorial I’ve ever seen begins “first, you write a migration script that creates your table” … and then once you’ve got the table, your class should just pick up the right attributes because of the columns in the database, and that’s … that’s how you start, unless you want to do something non-standard (which I’m sure plenty of people do, but it’s an active deviation from the default). Whereas you start a Catalyst app, and your first question is “do I even want to use a database here?”

Then, assuming you do, DBIx::Class is probably a default choice, but if you’ve got a stored-procedure-oriented database to interface to, you probably don’t need it, and for code that’s almost all insanely complex aggregates, objects really aren’t a huge win. But let’s assume you’ve gone for DBIx::Class.

Now you ask yourself “do I want Perl to own the database, or SQL?” In the former case you’ll write a bunch of DBIx::Class code representing your tables, and then tell it to create them in the database; in the latter you’ll create the tables yourself and then generate the DBIx::Class code from the database. There’s not exactly an opinion of which is best: generally, I find that if a single application owns the database then letting the DBIx::Class code generate the schema is ideal, but if you’ve got a database that already exists that a bunch of other apps talk to as well, you’re probably better having the schema managed some way between the teams for all those apps and generating the DBIx::Class code from a scratch database.
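As a rough illustration of those two directions – only a sketch, with hypothetical schema and connection details – the “Perl owns the database” route ends with DBIx::Class deploying the tables it describes, while the “SQL owns the database” route typically generates the classes from an existing database with dbicdump from DBIx::Class::Schema::Loader:

    # Perl owns the database: define result classes in code, then create the tables.
    use MyApp::Schema;                                   # hypothetical schema class
    my $schema = MyApp::Schema->connect('dbi:Pg:dbname=myapp', 'me', 'secret');
    $schema->deploy;                                     # issues CREATE TABLE for each result class

    # SQL owns the database: generate the DBIx::Class code from an existing database,
    # e.g. from the command line with DBIx::Class::Schema::Loader's dbicdump:
    #
    #   dbicdump -o dump_directory=./lib MyApp::Schema 'dbi:Pg:dbname=myapp' me secret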

Both of those are pretty much first class approaches, and y’know, if your app owns the database but you’ve already got a way of versioning the schema that works, then I don’t see why I should stop you from doing that and generating the DBIC code anyway. So it’s not so much about whether manual wiring is required for a particular task, but how much freedom you have to pick an approach to it, and how many decisions that freedom requires before you know how to fire off the relevant code to set things up. I mean, whether you classify code as boilerplate or not depends on whether you ever foresee wanting to change it.

So when you first create a Catalyst controller, you often end up with methods that participate in dispatch – have routing information attached to them – but are completely empty of code, which tends to look a little bit odd. So you often get questions from newbies of “why do I need to have a method there when it doesn’t do anything?”. But then you look at this code again when you’re getting close to feature complete, and almost all of those methods have code in them now, because that’s how the logic tends to fall out.

There are two reasons why that’s actively a good thing. First, because there was already a method there, even if it was a no-op to begin with, the fact that it’s a method is a big sign saying “it’s totally OK to put code here if it makes sense”, which is a nice reminder and makes it quite natural to structure your code in a way that flows nicely. Secondly, once you add it all up, any other approach would involve declaring the non-method routes and then redeclaring as methods every route that ends up with logic; if most of your methods end up with code in them, then for reasonably complex stuff the Catalyst style ends up being less typing than anything else would be. But again, we’re consciously paying a little bit more in upfront effort to enable maintainability down the road.
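As a small, hypothetical sketch of what that looks like in practice (controller and model names are invented for illustration): the methods below take part in dispatch via their attributes, and it’s perfectly normal for some of them to start out empty.

    package MyApp::Controller::Widgets;          # hypothetical controller
    use Moose;
    BEGIN { extends 'Catalyst::Controller' }

    # Participates in dispatch, but starts life as a no-op.
    sub index :Path :Args(0) { }

    # A few months in, most such methods have grown a body:
    sub list :Local :Args(0) {
        my ($self, $c) = @_;
        $c->stash( widgets => [ $c->model('DB::Widget')->all ] );
    }

    1;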

It’s easy to forget that Catalyst is not just a way of building sites, but also a big, big project in software architecture/engineering terms, built with best practices in mind.

Well, it is and it isn’t: there’s quite a lot of code in there that’s actually there not to support best practices, but to avoid forcing people to rewrite code until they’re adding features to it, since if you’ve got six years’ worth of development and a couple of hundred tables’ worth of business model, “surprise! we deleted a bunch of features you were using!” isn’t that useful, even when those features only existed because our design sucks (in hindsight, at least).

I’d say, yeah, that things like Chained dispatch and the adaptor model and the support for roles and traits pretty much everywhere enables best practices as we currently understand them. But there’s also a strong commitment to only making backwards incompatible changes when we really have to, because the more of those we make, the less likely people are to upgrade to a version of Catalyst that makes it easy to write new code in a way that sucks less (or, at least, differently).

But there’s a strong sense in the ecosystem, and in the way the community tends to do things, of trying to make it possible to do things as elegantly as possible, even with a definition of elegant that evolves over time. So you might wish that your code from, say, 2008 looked a lot more like the code you’re writing in 2013, but they can coexist just fine until there’s a big features push on the code from 2008, and then you refactor and modernise as you go. We’ve always had a bias towards modernisation, so things can be done as extensions, and towards prioritising making more things possible as extensions over adding things into the core.

So, for example, metacpan.org is a Catalyst app using Elasticsearch as a backend, and people are using assorted other non-relational things and getting on just fine … and back in 2006 the usual ORM switched from Class::DBI to DBIx::Class and it wasn’t a big deal (though DBIx::Class got featureful enough that people’s attempts to obsolete it have probably resulted in more psychiatric holds than they have CPAN releases), and a while back we swapped out our own Catalyst::Engine:: system for Plack code implementing the PSGI spec (a system for handling HTTP environment abstraction), and that wasn’t horribly painful and opened up a whole extra ecosystem.

Even in companies conservative enough to still be running 5.8.x Perl, most of the time you find that they’ve updated the ecosystem to reasonably recent versions, so they’re sharing the same associated toolkits as the new-build code in greenfield projects. So we try to avoid ending up too out of date without gratuitously breaking existing production code, to nudge people towards more modern patterns of use, and to leave room for people who love the bleeding edge without forcing it on the users who don’t. So sometimes things take longer to land than people might like. There’s a lot of stuff to understand, but if you’re thinking in terms of core business technology, rather than hacking something out that you’ll rewrite entirely in a year when Google buys you or whatever, I think it’s a pretty reasonable set of trade-offs.

In the forthcoming third and last part of the interview, we talk about the other Perl frameworks, Dancer and Mojolicious, the direction the project is moving in, and whether Perl 6 is a viable option for Web development.

 

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He is interested in anything related to the RDBMS and loves crafting complex SQL queries for generating reports. As a journalist, he writes articles, conducts interviews and reviews technical IT books.

All about Catalyst – interview of Matt S. Trout (Part 1 of 3)

2014 January 14
by Nikos Vaggalis

We talk to Matt S. Trout, technical team leader at consulting firm Shadowcat Systems Limited, creator of the DBIx::Class ORM and of many other CPAN modules, and of course co-maintainer of the Catalyst web framework. These are some of his activities, but for this interview we are interested in Matt’s work with Catalyst.

Our discussion turned out not to be just about Catalyst, though. While discussing the virtues of the framework we learned, in Matt’s own colourful language, what makes other popular web frameworks tick, and managed to bring out the consultant in him, who shared invaluable thoughts on architecting software as well as on the possibility of Perl 6 someday replacing Perl 5 for web development.

We concluded that there’s no framework that wins by knockout, but that the game’s winner will be decided on points, points given by the final judge, your needs.

So, Matt, let’s start with the basics. Catalyst is an MVC framework. What is the MVC pattern and how does Catalyst implement it?

The fun part about MVC is that if you go through a dozen pages about it on Google you’ll end up with at least ten different definitions. The two that are probably most worthwhile paying attention to are the original and the Rails definitions.

The original concept of MVC came out of the Xerox PARC work and was invented for Smalltalk GUIs. It posits a model which is basically data that you’re live-manipulating, a view which is responsible for rendering that, and a controller which accepts user actions.

The key thing about it was that the view knew about the model, but nothing else. The controller knew about the model and the view, while the model was treated like a mushroom – kept in the dark; the view/controller classes handled changes to the model by using the observer pattern, so an event got fired when they changed (you’ll find that angular.js, for example, works on pretty much this basis – it’s very much a direct-UI-side pattern).

Now, what Rails calls MVC (and, pretty much, Catalyst also does) is a sort of attempt to squash that into the server side at which point your view is basically the sum of the templating system you’re using plus the browser’s rendering engine, and your controller is the sum of the browser’s dispatch of links and forms and the code that handles that server side. So, server side, you end up with the controller being the receiver for the HTTP request, which picks some model data and puts it in … usually some sort of unstructured bag. In Catalyst we have a hash attached to the request context called the stash. In Rails they use the controller’s instance attributes and then you hand that unstructured bag of model objects off to a template, which then renders it – this is your server-side view.

So, the request cycle for a traditional HTML rendering Catalyst app is:

  1. the request comes in
  2. the appropriate controller is selected
  3. Catalyst calls the controller code, which performs any required alterations to the model
  4. the controller then tells the view to render a named template with a set of data (sketched below)
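A minimal, hypothetical sketch of steps 2–4 in controller code (the model, template and view names are invented; in a stock application the end action’s RenderView usually performs the final forward to the view for you):

    sub edit :Local :Args(1) {
        my ($self, $c, $id) = @_;

        # steps 2-3: this controller action was selected; fetch/alter model data
        # and drop it into the unstructured bag - the stash
        $c->stash( item => $c->model('DB::Item')->find($id) );

        # step 4: name a template and hand off to the view to render it
        $c->stash( template => 'item/edit.tt' );
        $c->forward( $c->view('HTML') );
    }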

The fun part, of course, is that for things like REST APIs you tend to think in terms of serialize/deserialize rather than event->GUI change, so at that point the controller basically becomes “the request handler” and the view part becomes pretty much vestigial, because the work to translate that data into something to display to the user is done elsewhere, usually client side JavaScript (well, assuming the client is a user facing app at all, anyway).

So, in practice, a lot of stuff isn’t exactly MVC … but there’ve been so many variants and reinterpretations of the pattern over the years that above all it means “keep the interaction flow, the business logic, and the display separate … somehow”, which is clearly a good thing, and idiomatic Catalyst code tends to do so. The usual rule of thumb is “if this logic could make sense in a different UI (e.g. a CLI script or a cron job), then it probably belongs inside the domain code that your web app regards as its model”, plus “keep the templates simple, and keep their interaction with the model read-only; anything clever or mutating probably belongs in the controller”.

So the drive is basically to push anything non-cosmetic out of the view, and then anything non-current-UI-specific out of the controller, and the end result is at least approximately MVC for some of the definitions and ends up being decently maintainable.

Can you swap template engines for the view as well as, at the backend, swap DBMS’s for the model?

Access to the models and views is built atop a fairly simple IoC (inversion of control) system: Catalyst loads and makes available whatever models and views are provided, and then the controller asks Catalyst for the objects it needs. The key thing is that a single view is responsible for one view onto the application; the templating engine is an implementation detail, in effect, and there are a bunch of view base classes that mean you don’t have to worry about that. But if you had an app with a main UI and an admin UI, you might decide to keep both those UIs within the same Catalyst application and have two views that use the same templating system but a completely different set of templates/HTML style/etc.
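For instance, a hypothetical sketch of that main/admin split, with two views sharing the Template Toolkit base class but pointing at different template sets (class names and paths are invented; a real app would more likely build the paths with MyApp->path_to):

    package MyApp::View::Main;
    use base 'Catalyst::View::TT';
    __PACKAGE__->config( INCLUDE_PATH => [ 'root/templates/main' ] );

    package MyApp::View::Admin;
    use base 'Catalyst::View::TT';
    __PACKAGE__->config( INCLUDE_PATH => [ 'root/templates/admin' ] );

    # A controller then picks the appropriate one:
    #   $c->forward( $c->view('Admin') );

    1;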

In terms of models, if you need support from your web framework to swap database backends, you’re doing something horribly wrong. The idea is that your domain model code is just something that exposes methods that the rest of the code uses – normally it doesn’t even live in the Catalyst Model/ directory. In there are adapter classes that basically bolt external code into your application.

Because the domain code shouldn’t be web-specific in the first place, you have some slightly more specialised adapters – notably Catalyst::Model::DBIC::Schema, which makes it easier to do a bunch of clever things involving DBIx::Class – but the DBIx::Class code, which is what talks to your database for you, is outside the scope of the Catalyst app itself.

The web application should be designed as an interface to the domain model which not only makes things a lot cleaner, but means that you can test your domain model code without needing to involve Catalyst at all. Running a full HTTP request cycle just to see if a web-independent calculation is implemented correctly is a waste of time, money and perfectly good electricity!

So, Catalyst isn’t so much DBMS-independent as domain-implementation-agnostic. There are Catalyst apps that don’t even have a database, that manage, for example, LDAP trees, or serve files from disk (e.g. the app for our advent calendar). The model/view instantiation stuff is useful, but the crucial advantage is cultural. It’s not so much about explicitly building for pluggability as about refusing to impose requirements on the domain code, at which point you don’t actually need to implement anything specific to be able to plug in pretty much whatever code is most appropriate. Sometimes opinion is really useful. Opinion about somebody else’s business logic, on the other hand, should in my experience be left to the domain experts rather than the web architect.

Catalyst also has a RESTful interface. How is the URI mapped to an action?

The URI mapping works the same way it always does. Basically, you have methods that are each responsible for a chunk of the URL, so for a URL like /domain/example.com/user/bob you’d have basically a method per path element: the first one sets up any domain-generic stuff and the base collection, the second pulls the domain object out of the collection, then you go from there to a collection of users for that domain and pull the specific user. Catalyst’s chained dispatch system is basically entirely oriented around the URL space, drilling down through representations/entities – which is a key thing to do to achieve REST, but basically a good idea in terms of URL design anyway. Plus, because the core stuff is all about path matching, it becomes pretty natural to handle HTTP methods last. So there’s Catalyst::Action::REST and the core method matching stuff that make it cleaner to do that, but basically they both just save you writing:

if (<GET request>) { ... } elsif (<POST request>) { ... } etc.

Of course you can do RESTful straight HTTP+HTML UIs, although personally I’ve found that style a little contrived in places. For APIs, though, the approach really shines. API code is – well, it’s going to be using a serializer/deserializer pair (usually JSON these days) instead of form parsing and a view – but apart from that, the writing of the logic isn’t hugely different. RESTful isn’t really about a specific interface, it’s about how you use the capabilities available. But the URI mapping and request dispatch cycle is a very rich set of capabilities, and allows a bunch of places to fiddle with dispatch during the matching process. Catalyst::Action::REST basically hijacks the part where Catalyst would normally call a method and calls a method based on the HTTP method instead; so, say, instead of user you’d have user_GET called. There’s also Catalyst::Controller::DBIC::API, which can provide a fairly complete REST-style JSON API onto your DBIx::Class object graph.
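To make that concrete, here is a hedged sketch (controller, model and relationship names are all hypothetical) of the /domain/example.com/user/bob style described above, with the endpoint dispatching on HTTP method via Catalyst::Action::REST; in a real API you’d typically also extend Catalyst::Controller::REST to get serialization handled for you:

    package MyApp::Controller::Domain;     # hypothetical names throughout
    use Moose;
    BEGIN { extends 'Catalyst::Controller' }

    # handles the /domain/example.com part of the URL
    sub domain :Chained('/') :PathPart('domain') :CaptureArgs(1) {
        my ($self, $c, $name) = @_;
        $c->stash( domain => $c->model('DB::Domain')->find({ name => $name }) );
    }

    # /domain/example.com/user/bob - the endpoint dispatches on HTTP method
    sub user :Chained('domain') :PathPart('user') :Args(1) :ActionClass('REST') { }

    sub user_GET {
        my ($self, $c, $username) = @_;
        $c->stash( user => $c->stash->{domain}->users->find({ name => $username }) );
    }

    sub user_POST {
        my ($self, $c, $username) = @_;
        # update from the deserialized request body, then set a response status
    }

    1;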

So again it’s not so much that we have specific support for something, but that the features provide mechanism and then the policy/patterns you implement using those are enabled rather than dictated by the tools. I think the point I’m trying to make is that REST is about methods as verbs and about entities as first class things so it implies good URL design … but you can do good URL design and not do the rest of REST, it’s just that they caught on about the same time.

What about plugging CPAN modules in? I understand that this is another showcase of Catalyst’s extensibility. Can any module be used, or must they adhere to a public interface of some sort?

There’s very little interface; for most classes, either your own or from CPAN, Catalyst::Model::Adaptor can do the trick. There are three versions of that, which are:

  • call new once, during startup, and hang onto the object
  • call new no more than once per request, keeping the same object for the rest of the request once it’s been asked for
  • call new every time somebody asks for the object

The first one is probably the most common, but it’s often nice to use the second approach so that your model can have a first-class understanding of, for example, which user is currently logged in, if any, and the adaptor manages that lifecycle for you. Anything with a new method is going to work, which means any and all object-oriented stuff written according to convention in at least the past 10 years or so.
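A minimal sketch of the startup-time variant, with class names and config values that are purely illustrative (the per-request and per-call variants live in the same distribution, as Catalyst::Model::Factory::PerRequest and Catalyst::Model::Factory respectively, if memory serves):

    package MyApp::Model::Domain;            # hypothetical adapter component
    use base 'Catalyst::Model::Adaptor';     # construct once at startup, keep the object

    __PACKAGE__->config(
        class => 'MyApp::Domain::API',       # your ordinary, non-Catalyst class
        args  => { config_key => 'value' },  # passed to MyApp::Domain::API->new(...)
    );

    1;

    # Controllers then just ask for it: $c->model('Domain')->some_method(...);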

For anything else you break out Moose/Moo, and write yourself a quick normal class that wraps whatever crazy thing you’re using and now you’re back to it being easy (and you’ll probably find that class is more pleasant to use in all your code, anyway). Really, any attempt at automating the remaining cases would probably be more code to configure for whatever use-case you have than to just write the code to do it.

For example, sometimes you want a component that’s instantiated once, but then specialises itself as requested. A useful example would be “I want to keep the database connection persistent, but still have a concept of a current user to use to enforce restrictions on queries”. So there’s a role called Catalyst::Component::InstancePerContext that provides that – instead of using the Adaptor’s per-request version, you use a normal adaptor, and use that role in the class it constructs, and then that object will get a method called on it once per request, which can return the final model object to be used by the controller code. I’ve probably expended more characters describing it than any given implementation takes, because it’s really just the implementation of a couple of short methods, and besides that, the most common case for it is DBIx::Class. There’s also a PerRequestSchema role shipped with Catalyst::Model::DBIC::Schema (which is basically a DBIx::Class-specialised adaptor, remember) that reduces it to something like:

    sub _build_per_request_schema {
        my ($self, $c) = @_;
        $self->schema->restrict_with_object($c->user);
    }
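And for the general, non-DBIx::Class case, a rough sketch of the role in use – here applied directly to a small model component rather than via the adaptor, with all class names hypothetical:

    package MyApp::Model::UserScopedThing;
    use Moose;
    extends 'Catalyst::Model';
    with 'Catalyst::Component::InstancePerContext';

    # Called once per request; whatever it returns is what $c->model(...) hands back.
    sub build_per_context_instance {
        my ($self, $c) = @_;
        return My::Domain::Thing->new( user => $c->user );   # hypothetical domain class
    }

    1;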

… but again the goal isn’t so much to have lots of full-featured integration code, but to minimise the need to write integration code in the first place.

In the forthcoming second part of the interview, we talk about the flexibility of Catalyst, its learning curve, Ruby on Rails, and the framework in software engineering terms.

 

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He is interested in anything related to the RDBMS and loves crafting complex SQL queries for generating reports. As a journalist, he writes articles, conducts interviews and reviews technical IT books.

Git Reset

2014 January 6
by Lorna Jane Mitchell

Git-ing Out Of Trouble

Git is a popular and powerful tool for managing source code, documentation, and really anything else made of text that you’d like to keep track of.  With great power comes quite a lot of complexity however, and it can be easy to get into a tangle using this tool.  With that in mind, I thought I’d share some tips for how to “undo” with git.

The closest thing to an undo command in git is git reset.  You can undo various levels of thing, right up to throwing away changes that are already in your history.  Let’s take a look at some examples, in order of severity.

 

Git Reset

Git reset without any additional arguments (technically it defaults to git reset --mixed) will simply unstage your changes.  The changes will still be there, the files won’t change, but the changes you had already staged for your next commit will no longer be staged.  Instead, you’ll see locally modified files.  This is useful if you realise that you need to commit only part of the local modifications; git reset lets you unstage everything without losing changes, and then stage the ones you want.
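A sketch of that partial-commit workflow (the file name is just an example):

    git add .                       # oops - staged more than intended
    git reset                       # unstage everything; working copy untouched
    git add lib/Foo.pm              # stage only the change you actually want
    git commit -m "Change Foo only"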

 

Git Reset --hard

Using the --hard switch is more destructive.  This will discard all changes since the last commit, regardless of whether they were staged or not.  It’s relatively difficult to lose work in git, but this is an excellent way of achieving just that!  It’s very useful though when you realise you’ve gone off on a tangent or find yourself in a dead end, as git reset --hard will just put you back to where you were when you last committed.  Personally I find it helpful to commit before going for lunch, as immediately afterwards seems to be my peak time for tangents and I can then easily rescue myself.

 

Git Reset --hard [Revision]

Using --hard with a specific SHA1 will throw away everything in your working copy, including the staging area, and discard the commits after the one you name here.  This is great if you’re regretting something, or have committed to the wrong branch (make a new branch from here, then use this technique on the existing branch to remove your accidental commits – I do this one a lot too).  Use with caution though: if you have already pushed your branch somewhere else, your next push will need the -f switch to force the push, and if anyone has pulled your changes they are very likely to have problems, so this isn’t a recommended approach for already-pushed changes.
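For example, a sketch of the wrong-branch rescue described above (the branch name and SHA1 are placeholders):

    git branch rescue-work          # new branch pointing at the current commit
    git reset --hard abc1234        # move this branch back to the commit you name
    git checkout rescue-work        # the "accidental" commits now live here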

Hopefully there are some tips there that will help you to get out of trouble in the unlikely event that you need them.  When things go wrong, stay calm and remember that it happens to the best of us!

Lorna Jane Mitchell is a web development consultant and trainer from Leeds in the UK, specialising in open source technologies, data-related problems, and APIs. She is also an open source project lead, regular conference speaker, prolific blogger, and author of PHP Web Services, published by O’Reilly.

 

Do you want to know more about Git?

Lorna will be giving a full day tutorial – ‘Git for Development Teams’ on Thursday 6th February 2014 in London. This tutorial is organized by FLOSS UK and O’Reilly UK Ltd. Click here for further details. Please note that the early bird rates are available until January 14th.

Season’s Greetings

2013 December 23
by Josette Garcia

Best Wishes and Happy Holiday to you!

Erlang: The Written History

2013 December 21

Erlang is now over 25 years old. I’ve been involved with Erlang from the very start, and seen it grow from an idea into a fully-fledged programming language with a large number of users.

I wrote the first Erlang compiler, taught the first Erlang course, and with my colleagues wrote the first Erlang book. I started one of the first successful Erlang companies and have been involved with all stages of the development of the language and its applications.

In 2007 I wrote Programming Erlang (Pragmatic Bookshelf) – it had been 14 years since the publication of Concurrent Programming in Erlang (Prentice Hall, 1993) – and our users were crying out for a new book. So in 2007 I gritted my teeth and started writing. I had the good fortune to have Dave Thomas as my editor and he taught me a lot about writing. The first edition was pretty ambitious: I wanted to describe every part of the language and the major libraries, with example code, and to show real-world examples that actually did things. So the book contained runnable code for things like a SHOUTcast server, so you could stream music to devices, and a full-text indexing system.

The first edition of Programming Erlang spurred a flurry of activity – the book sold well. It was published through the Pragmatic Press Beta publishing process. The beta publishing process is great – authors get immediate feedback from their readers. The readers can download a PDF of the unfinished book and start reading and commenting on the text. Since the book is deliberately unfinished they can influence the remainder of the book. Books are published as betas when they are about 70% complete.

On day one over 800 people bought the book, and on day two there were about a thousand comments in the errata page of the book. How could there be so many errors? My five hundred page book seemed to have about 4 comments per page. This came as a total shock. Dave and I slaved away, fixing the errata.  If I’d known I’d have taken a two week holiday when the book went live.

A couple of months after the initial PDF version of the book, the final version was ready and we started shipping the paper version.

Then a strange thing happened – The Pragmatic Bookshelf (known as the Prags) had published an Erlang book and word on the street was that it was selling well. In no time at all I began hearing rumours, O’Reilly was on the prowl looking for authors – many of my friends were contacted to see if they were interested in writing Erlang books.

This is really weird: when you want to write a book, you can’t find a publisher. But when an established publisher wants to publish a book on a particular topic, it can’t find authors.

Here’s the timeline since 2007:

* 2007 – Programming Erlang – Armstrong – (Pragmatic Bookshelf)

* 2009 – ERLANG Programming – Cesarini and Thompson – (O’Reilly)

* 2010 – Erlang and OTP in Action - Logan, Merritt and Carlsson – (Manning)

* 2013 – Learn You Some Erlang for Great Good - Hebert – (No Starch Press)

* 2013 – Introducing Erlang: Getting Started in Functional Programming – St. Laurent (O’Reilly)

* 2013 – Programming Erlang - 2nd edition- Armstrong – (Pragmatic Bookshelf)

Erlang was getting some love so languages like Haskell needed to compete – Real World Haskell by Bryan O’Sullivan, John Goerzen and Don Stewart was published in 2008.  This was followed by Learn You a Haskell for Great Good by Miran Lipovaca (2011).

My Erlang book seemed to break the ice. O’Reilly followed with Erlang Programming and Real World Haskell – which inspired No Starch Press and Learn You a Haskell for Great Good which inspired Learn You Some Erlang for Great Good and the wheel started to spin.

Fast Forward to 2013

I was contacted by the Prags: did my book want a refresh? What’s a refresh?  The 2007 book was getting a little dated. Core Erlang had changed a bit, but the libraries had changed a lot, and the user base had grown.  But also, and significantly for the 2nd edition, there were now four other books on the market.

My goals in the 1st edition had been to describe everything and to document everything that was undocumented. I wanted a book that was complete in its coverage, and I wanted a book for beginners as well as advanced users.

Now of course this is impossible. A book for beginners will have a lot of explanations that the advanced user will not want to read. Text for advanced users will be difficult for beginners to understand, or worse, impossible to understand.

When I started work on the 2nd edition I thought, “All I’ll have to do is spiff up the examples and make sure everything works properly.” I planned to drop some rather boring appendices, drop the odd chapter and add a new chapter on the type system… so I thought.

Well, it didn’t turn out like that. My editor, the ever helpful Susanna Pfalzer, probably knew that, but wasn’t letting on.

In the event I wrote 7 new chapters, dropped some rather boring appendices and dropped some old chapters.

The biggest difference in the 2nd edition was redefining the target audience. Remember I said that the first edition was intended for advanced and beginning users? Well, now there were four competing books on the market. Fred Hebert’s book was doing a great job for the first-time users, with beautifully hand-drawn cartoons to boot. Francesco and Simon’s book was doing a great job with OTP, so now I could refocus the book and concentrate on a particular band of users.

But who? In writing the 2nd edition we spent a lot of time focusing on our target audience. When the first seven chapters were ready we sent the chapters to 14 reviewers. There were 3 super advanced users – the guys who know everything about Erlang. What they don’t know about Erlang could be engraved on the back of a tea-leaf. We also took four total beginners – guys who know how to program in Java but knew no Erlang – and the rest were middling guys: they’d been hacking Erlang for a year or so but were still learning. We threw the book at these guys to see what would happen.

Surprise number one: some of the true beginners didn’t understand what I’d written – some of the ideas were just “too strange”. I was amazed – goodness gracious me, when you’ve lived, breathed, dreamt and slept Erlang for twenty-five years and you know Erlang’s aunty and grandmother, you take things for granted.

So I threw away the text that these guys didn’t understand and started again.  One of my reviewers (a complete beginner) was having problems – I redid the text, they read it again, they still didn’t understand. “What are these guys, idiots or something? I’m busting a gut explaining everything and they still don’t understand!” And so I threw the text away (again), re-wrote it and sent them the third draft.

Bingo! They understood! Happy days are here again! Sometimes new ideas are just “too strange” to grasp. But by now I was getting a feeling for how much explanation I had to add: it was about 30% more than I thought, but what the heck, if you’ve written a book you don’t want the people who’ve bought the damn thing to not read it because it’s too difficult.

I also had Bruce Tate advising me – Bruce wrote Learn 600 Languages in 10 Minutes Flat (officially known as Seven Languages in Seven Weeks).  Bruce is a great guy who does a mean Texas accent if you feed him beer and ask nicely. Bruce can teach any programming language to anybody in ten seconds flat, so he’s a great guy to have reviewing your books.

What about the advanced guys? My book was 30% longer and was aimed at converting Java programmers who have seen the light, who wish to renounce their evil ways and convert to the joys of Erlang programming, but what about the advanced guys?

Screw the advanced guys – they wouldn’t even buy the book because they know it all anyway. So I killed my babies and threw out a lot of advanced material that nobody ever reads. My goal is to put the omitted advanced material on a website.

I also got a great tip from Francesco Cesarini: “They like exercises.” So I added exercises at the end of virtually every chapter.

So now there is no excuse for not holding an Erlang programming course: there are exercises at the end of every chapter!

Joe Armstrong, author of Programming Erlang, is one of the creators of Erlang. He has a Ph.D. in computer science from the Royal Institute of Technology in Stockholm, Sweden and is an expert on the construction of fault-tolerant systems. He has worked in industry, as an entrepreneur, and as a researcher for more than 35 years.

 

 

 

 

 

 

How Lean Saves You Money

2013 December 11

Dave Fletcher, founder and MD of web and apps developer White October believes that by using tried and tested Lean methods in product development, you can make your idea work better while spending less of your budget. Here he explains how it works for his customers.

White October

You Have a Great Idea

If you’re like most of our customers then you’ll have a great idea that you really believe in.  Your idea could be about how to solve a problem in your business or sector.  Or it could be about how to take advantage of an opportunity you’ve noticed.  Either way your idea will be a good one and you’ll need help to make it happen.

However, at some point in the past you’ve probably had a bad experience of developing good ideas into products.  Maybe your product didn’t develop as smoothly as intended. It may have missed big deadlines, gone over budget or delivered ineffective results.  You’re scared of this happening again and how much your idea might cost.  Your budget is finite and your company may even see software development as expensive and risky.

Whatever your past experiences, taking a Lean approach helps you drastically cut the risks and deliver a better product, for less money.

Why Lean is the Solution

“Lean: a method for developing products that shortens their development cycles by adopting a combination of business-hypothesis-driven experimentation, iterative product releases, and ‘validated learning’.” - Wikipedia

Lean is a blend of techniques originally developed to bring pioneering high-tech products effectively to market.  These techniques are now being adopted across the globe by individuals, teams and companies looking to introduce new products or services into the market.  Because of their proven success we use these techniques across all our projects.

By deploying Lean techniques, we treat your idea with care and belief, while rigorously testing out its viability by identifying, challenging and then validating your assumptions before making significant decisions.  This approach minimises risks and conserves your budget, ensuring every penny is spent as efficiently as possible.

The result?  You get more time and money to spend on the best possible version of your idea.

Testing Assumptions Avoids Waste

We all make assumptions.  It’s part of life, especially when we’re at the early stages of an idea.  At this stage we usually make assumptions about who will use the eventual product and how.

At White October we help you test these assumptions.  We help you make good decisions based on what you know while testing or postponing what you don’t.  This drastically reduces the chances of making an incorrect assumption, setting off in the wrong direction and wasting money on ideas or features that aren’t going to work.

If this story sounds familiar it’s because everyone has done it, even (and especially) the most successful companies.

Lean Conserves Your Budget for the Most Valuable Features

“Lean methodology favours simple experimentation over elaborate planning, customer feedback over intuition, and iterative design over traditional “big design up front” development.” - Harvard Business Review, May 2013

Our approach to Lean identifies assumptions in your thinking, creates simple experiments to test them, identifies the risks and points you in the direction of solutions.  In short it gets the proof you need of the most cost effective direction to take your idea.

Once you’ve made your decision we develop the simplest but most value-filled working version of your product possible.  As we do this we keep testing everyone’s assumptions, building in small iterations, measuring their effectiveness and learning from the results.  We then help you release your product and get fast, quantitative feedback from its users.

By this point you will have:

  • A working version of your product that already delivers real value to its users.

  • Specific, informed feedback on the most valuable aspects of your product.

  • A significant wedge of your budget left to spend on developing your product’s most valuable features even further.

So if you have an idea which you believe in but are concerned it will cost too much to bring to market, give me a call as chances are it won’t be as costly as you think.

 

Dave Fletcher is founder and MD of White October.  Over the past 10 years he has grown the agency to 25 full time staff.

Dave completed a Mathematics degree from Nottingham University before training on the job as an analyst programmer with RM Plc in Oxfordshire.  With a passion for the web and its potential, Dave has kept at the forefront of technology.  He and White October have gravitated towards projects that pose a technical challenge – his creative drive, technical grounding, and commercial awareness make him ideally placed to advise clients on product development strategy.

LinkedIn: uk.linkedin.com/in/davewhiteoctober/

Twitter: https://twitter.com/whiteoctober

Skype: dafletcher

Phone +44 (0) 1865 920707

Big data, climate change and developing economies: predictive modeling for improved lives

2013 November 27
by Dai Clegg

I did my first, and possibly last, Ignite talk at Strata London this week. If you don’t know the format it goes like this: 5 minutes, 20 slides; the slides automatically advance every 15 seconds and you have to tell your story and finish on time.

The story I was telling wasn’t about Acunu, nor even about low latency / real time analytics, which is a topic I’m often to be heard rabbiting on about. Low latency operational intelligence is what Acunu is all about. We build a platform for people who need to know what’s going on in their business right now. Pretty much every industry can benefit from tracking some KPIs minute-by-minute. And you can’t do that on day-old data in a data warehouse or Hadoop cluster, no matter how fast your BI tool can run queries.


But this time I was talking about a project dreamed up by some friends at Evidence for Development (EfD), a small charity I work with. Their idea is to build an economic model of the whole of rural southern Africa, including data about how people get access to food, market prices and meteorological data, then to build it up over a number of years until you have a database capable of predictive modeling of the effects of climate change in the region. The people at EfD have a ton of experience in building models of rural economies, which are widely, but not consistently, used in the region to help direct better aid projects and provide governments with early warning of disruptions (e.g. impact of crop failure).

There’s lots of support for the idea, but it’s going to be an interesting project to try to pull together all the strands. We’ve already made a start with getting the methodology onto the post-grad curriculum in economics departments in universities in the region, and starting to assemble open source developers to implement and support the software.

To see my talk you might have to refresh the page.

 

 

It was a challenge taking on the Ignite, lightning-talk format, but it was for a good cause and pretty enjoyable overall – apart from stumbling a few times as I raced to get through the story and keep in sync with the slides. I’d thoroughly recommend it to anyone who fancies themselves a competent public speaker. I learned a bit that I hope will make me better in all formats, and I am in awe of my fellow presenters.

Dai Clegg is a multi-decade veteran of the software industry, at some of the giants (Oracle & IBM) and some of the not-so-giant (Netezza and Acunu). He has worked in the trenches as an implementor, in the ivory tower as a text-book author and in a suit as a marketer. He loves learning about new technologies and how they can be usefully exploited – especially in ways that might make the world a better place.

Another episode of PHP in Sweetlake

2013 November 19


Last Friday was another episode of PHP in Sweetlake. We had a perfectly good plan, and a perfectly good schedule. So obviously that is *not* how things went down…

Due to excessive traffic between Rotterdam and Den Haag, our speaker Harry Verveer ran late. So as a time-filler, we watched a video on privacy by Georg Greve (board member of the Free Software Foundation Europe and CEO at Kolab Systems AG). The video was prepared as a keynote for the government “AlertOnline” campaign that Hans de Raad and Wouter Parent have been involved in. I felt there was *a lot* of food for thought in the video. You can read what I picked up from Georg’s talk at the bottom of this write-up.

Harry Verveer gave his talk on the lost art of UML. While he himself marked most of the types of UML diagrams as obsolete (painful reminders of being forced to draw diagrams in school), there are three types of diagrams still used in his workflow:

  • The Entity Relationship Diagram or ERD: used to draw entity (database) relationships. This is still a very good way to come to grips with what objects and properties you are going to need, without diving into actual code.
  • The Class Diagram: get a feel for what behaviour your classes will need to have.
  • Sequence Diagrams:  explain concepts to non-programmers (like customers). All I can say about this is I want his customers! Most of my customers consider Sequence Diagrams to be abstract art…

Two basic benefits of using UML kept returning in the talk: drawing diagrams forces you into a birds-eye view. And: it’s cheap to refactor on paper. These are two pretty compelling reasons so I’d say if you don’t know these diagrams, learn a bit more about them. They might yet serve a purpose.

We finished with a joint discussion where I got pelted with stress balls in a live demonstration of a Distributed Denial of Service attack, we got a bonus security session where we were educated on how easy it is to compromise wifi and mobile phones (no mobile phones were hurt during the exercise, but TURN OFF WIFI WHEN YOU DON’T USE IT!), and finally a demonstration of Kali Linux.

Magic happened, nobody wanted to leave and we had to be kicked out by Hans at eleven…

Thanks to all the enthusiasm, a security workshop is being planned by WeSecureIT.nl for a future date! Stay tuned.

The Sweetlake PHP website is now on github. It is open source and will be online for about three weeks on the EngineYard platform. I tried to coax a free hosting account from EngineYard, but unfortunately they don’t have a programme for this, so the website remains a work in progress.

As mentioned previously, watch Georg’s keynote and my thoughts on it.

Keynote on Privacy by Georg Greve (you might have to refresh the page)

 

 

We’ve all heard about the NSA’s Prism program by now, and we’ve learned from the leaks by Edward Snowden what the Intelligence Agencies are up to regarding collection of data. What is surprising, Georg says, is that we’re surprised. Because it’s hardly news… the question is, are we going to do anything about it?

Google, Facebook, LinkedIn give you “free” products and then base their business model around selling data about your behaviour. (If you’re not paying for a service – you’re the product). A good example where you can experience this for yourself is Google’s customer support. Users say it’s slow, inaccessible, and in short, awful. But the truth is: Google’s customer support is very good! But you, the product, are just not entitled to it…

And ask yourself: can you truly allow your email to be parsed for “advertising purposes” when you’re receiving confidential information? Do you want Google to know about unhappy employees even before you do and so target them with ads of other opportunities?

Hans de Raad will be taking this further in our next session on December 6th. Don’t miss it!

 

Ramon de la Fuente; Father of two kids, one company and a user group called SweetlakePHP

Google Code-in 2013 and Google Summer of Code 2014 are on

2013 November 15

An invitation from Google!

 


A global online open source development & outreach contest
for precollege students ages 13-17

The Google Code-in contest gives students around the world an opportunity to explore the world of open source development. Google not only runs open source software throughout our business; we also value the way the open source model encourages people to work together on shared goals over the Internet.

Give it a try from November 18th, 2013 to January 6th, 2014!

Participants complete “tasks” of their choice for a variety of open source software projects. Students can earn t-shirts and certificates for their work and 20 dedicated students (2 chosen by each software project) will win a trip to Google in Mountain View, CA, USA.

Since open source development is much more than just computer programming, there are lots of different kinds of tasks to choose from, broken out into 5 major categories:

1. Code: Writing or refactoring code
2. Documentation/Training: Creating and editing documentation and helping others learn
3. Outreach/Research: Community management and outreach/marketing, or studying problems and recommending solutions
4. Quality Assurance: Testing to ensure code is of high quality
5. User interface: User experience research or user interface design

The 10 open source organizations that students will be working with this year are: Apertium, BRLCAD, Copyleft Games Group, Drupal, Haiku, KDE, RTEMS, Sahana Software Foundation, Sugar Labs, and Wikimedia Foundation.

Over the past 3 years, 1238 students from 71 countries completed at least one task in the contest. This year we hope to have even more students participate globally. Please help us spread the word and bring more students into the open source family!

Visit googlemelange.com to read our Frequently Asked Questions for all the details on how to participate, to follow our blog, and to join the contest discussion list at http://groups.google.com/group/gcidiscuss for updates on the contest.

The Google Code-in contest starts on November 18, 2013, join the fun!

Make Things Do Stuff seek fresh young talent

2013 November 13


As you may or may not have heard yet, this Autumn Make Things Do Stuff are looking to recruit a cohort of the freshest young tech talents and content creators from across the UK.

Now in the last week of recruitment (the deadline for applications is Monday 18th November 2013), we are looking to get the word out to as many young people as possible before the end of the week.

Coming together to explore the very frontiers of digital making, we will be supporting this group of 16-24 year olds to discover, document and share the most exciting tech innovations, events & product launches.

All they need to do to apply now is send over:

1) Name
2) Age
3) Location
4) A brief description of why they would like to be involved.

 

The deadline is Monday 18th November 2013

Full details can be found on the Make Things Do Stuff website.

 

Please be aware that this opportunity will be primarily carried out online and thus applicants can be from anywhere in the UK.

Furthermore, it can be conducted alongside school or college.