A few years ago I met Andrea Maietta and Paolo Aliverti, founders of Frankenstein Garage in Milan. Frankenstein Garage is a fab lab that came to life in May 2011 in front of a coffee machine – one of the most dangerous places in the world, because it hosts conversations that can lead you to the strangest and most unimaginable places and situations. Which is exactly what happened to Andrea and Paolo. Of course we discussed Make: magazine from Maker Media, and in particular the issue in which Andrea found the famous cigar box guitar he built. I was in awe: this thing works! Andrea was kind enough to make me a copy, which is now proudly sitting in my lounge waiting for me to write this post and learn to play.
What do you need to build your cigar box guitar?
- One long, narrow, knot-free piece of hardwood, 102 cm x 4 cm x 2 cm, for the neck (oak or maple works well)
- 2 small pieces of wood to be used as rests for the strings – the bridge and the nut, I believe
- A strong cardboard cigar box, 4 cm x 2 cm x 23 cm. Yes, the box has to be empty, and we don’t care about the brand.
- Nuts and bolts of different sizes to be used as tuning pegs
- Elastic bands
- Plastic strings of different gauges
- A drill
- Sanding paper
And now for building the guitar – it all looks very easy to me (!):
- Drill 6 holes in the baton for the strings (3 at each end)
- Drill 3 larger holes for the bolts used as tuning pegs
- Stick one small bridge approximately 10 cm from one end of the neck
- Place the other bridge at the other end
- Stick the box on the back of the baton
- Lastly, add the strings and make sure they are tight enough to make a lovely sound when played.
I should add that Andrea and Paolo have now written a book – Il manuale del maker. La guida pratica e completa per diventare protagonisti della nuova rivoluzione industriale (The maker’s handbook – The practical and comprehensive guide to become protagonists of the new industrial revolution). Unfortunately it is in Italian so hopefully somebody somewhere will translate it. Watch this space!
Just got a teaser for this new conference –
Not only will I be there with a bunch of books that will sell at a 40% discount, but the organizers have also offered a $100 discount to the friends of O’Reilly. To get this fantastic discount, register here.
Some of our authors will be there including:
Dan North: who will give a full-day tutorial called Accelerated Agile: from Months to Minutes, as well as the talk Jackstones: the journey to mastery
Chad Fowler: McDonalds, Six Sigma, and Offshore Outsourcing: Unexpected Sources of Insight
Douglas Crockford: Managing Asynchronicity with RQ
Ian Robinson: Graph Search: The Power of Connected Data
Michael Nygard: Cooperating when the fur flies
Mitchell Hashimoto: Vagrant, Packer, Serf: Maximum Potency DevOps
You can find a complete list of speakers here.
Budapest here I come!
Hope to see you there.
Quink allows developers to add rich input and self-editing areas to web pages and web apps.
You can find it at www.quink.mobi and on GitHub.
Why did it get built?
The trigger was that I could not find a good solution for editing rich content on mobile, specifically on the iPad platform – at least not one that hit all the key points I was aiming for. We (IMD Business School) have had an iPad app in production for supporting our course participants since June 2010, and I wanted to move beyond plain text annotations on PDFs, notes and plain text discussion forums with file attachments. I wanted participants in our courses to be able to create richer content for themselves, for sharing with fellow participants, and for feedback to and from the professors.
Once started down that road, there are a host of decisions to make, top of the list being:
- data / file format
- editing capabilities
- separate app or in-app component
- openness to variety of use cases, or focus on a tight scope
There is more detail on these points below, but after initial consideration, I was looking for
- an HTML editor that works within the browser
- with a good UI/UX for basic rich text editing,
- that could be embedded in our app, and
- which had a good API and plug-in architecture to allow great flexibility.
So I started looking for solutions. While there are many things that fulfill some of the requirements, I simply could not find one that would work well on the iPad, the primary target. In the end it seemed to come down to a choice between doing significant work on somebody else’s architecture, all of which were designed for the desktop browser, or starting from scratch and focusing primarily on the mobile environment and our own needs. Even that decision is not a no-brainer. But the end result was that we followed the path that led to Quink.
I won’t go into all the alternatives considered here, but for me the decisions were reasonably clear:
Data format: HTML
For me, HTML is the only rational choice for the base format. It is the most versatile and widely used document format, has a really excellent track record of backward compatibility, and is free of proprietary control. If that seems a little strange, I guess it is just because people don’t think of it as a document format in the same category as PDF, Word docs, and so on. The only thing that is really tricky is precise, locked-down control of presentation and layout. But in the multi-device world, I see that less as a critical feature than as a lurking problem. I would argue that the natural tendency of HTML to flow is more valuable, though more difficult to work with.
Separate app or in-app component
The easy option would be to break out into specialist apps / editors. That makes extensibility simple, but it provides a really horrible user experience for many use cases and is simply unusable for some core requirements. Using 3rd party apps also creates all kinds of problems about cross-platform requirements, compatibility, and data management. For something as core as document creation and viewing, we needed something we could build into our app and our web portals, and be sure it would just work.
Editing capabilities
The minimum requirement was easy creation of ‘rich text’: some basic formatting for headings, lists and emphasis, plus the ability to include images. That’s really the core requirement, and it covers 80% of immediate uses. But there is always the remaining 20%…
Tight focus or broad applicability
Beyond the basic capabilities, I knew from the start that there are a million things that will be required at some point: tables, graphs, vector graphics, video, audio, and just about anything else you can imagine. When each of these will become important or critical is unknown – so the key requirement is to have something that is extensible in response to new demands and use cases. HTML and the web stack provide a good framework for allowing this, and for using components developed by others – both proprietary and open source. So our goal was to create an architecture where we could employ specialist content editors developed by others, out of the box. I always strive to create architectures that put as few limits as possible on the future without incurring unreasonable current costs – and I think we have achieved that.
Design – What is special?
Content divs and tagging
Probably the most significant thing about Quink is the approach to extensibility and plugins. The idea is that a page is made up of units, for each of which you may need a specialist editor. Quink exploits the HTML structure where a page is made up of a set of divs and elements. Divs provide clean boundaries for content. Some divs may be tagged to identify the editor functionality that is appropriate for editing them. If the user wishes to edit such tagged content, then the specialist editor is loaded, passed the content that it needs to edit, and the Quink core steps back to let it do its job. When it is finished, the modified content is updated in the HTML.
The base implementation of this was designed to allow the use of editors that have no knowledge of Quink. There are very, very few requirements that an editor must satisfy to be usable as a plugin, and the system uses adapters so that the requirements on the underlying component are functional capabilities, not any specific API. To be eligible as a plugin, an editor need only:
- Be loadable in a web page
- Have some method of delivering the edited content – i.e. renderable HTML – to the Quink core so that it can be dropped into a page
- To be re-editable, have a means of accepting HTML sent back to it
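As a sketch of how such an adapter might look (this is illustrative only – the function and method names here are assumptions, not Quink’s actual API):

```javascript
// Hypothetical adapter sketch – the names are assumptions, not Quink's real API.
// An adapter wraps a third-party editor so the core only deals with
// functional capabilities: load the editor, hand it HTML, get HTML back.
function makeAdapter(editor) {
    return {
        // Called when the user opens tagged content for editing.
        open: function (html) {
            editor.load();                // requirement 1: loadable in a page
            if (html) {
                editor.setHtml(html);     // requirement 3: re-editing existing content
            }
        },
        // Called by the core's save-and-exit button.
        save: function () {
            return editor.getHtml();      // requirement 2: renderable HTML out
        }
    };
}
```

The point is that the underlying editor never needs to know about Quink; the adapter translates between whatever API the editor happens to have and these few capabilities.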
Being loadable includes being hosted on a different server. Quink defaults to opening plugins in iframes, and only loads them when the user asks for them. They don’t have to be part of the root site, so you don’t actually need access to the source, or to own the editor component. Of course, when setting up a plugin you should trust the provider and the code enough to give them access to an iframe in your page!
The mechanisms for transferring the data can include all kinds of back-end tricks if needed, though we haven’t gone down that road ourselves yet. Quink supplies the user with a button to save and exit, or to simply quit the plugin – which calls a function on the plugin’s adapter, so the plugin editor itself does not need to know it is operating inside Quink, or to emit any events.
If an editor is not capable of re-editing existing content it will not break anything either, though of course it may not meet user expectations.
Adding Quink to a page & configuration
Quink bootstraps from a small script which exists mainly to set up the URLs to load from and kick Require.js into action. The default bootstrap script will do the job for most installations; the target page then only needs a one-line inclusion of the bootstrap script, and one or more divs declared contenteditable; Quink is enabled as the editor for all of them by default.
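A minimal host page might look something like this (the script path and file name are assumptions – check the repository for the actual bootstrap file):

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- Any contenteditable div is picked up by Quink by default -->
    <div contenteditable="true">
      <p>Start typing here…</p>
    </div>
    <!-- The one-line inclusion of the (hypothetically named) bootstrap script -->
    <script src="quink/quink.js"></script>
  </body>
</html>
```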
Various aspects of Quink are easily configurable. A configuration can be set up by adjusting JSON files: the plugins, toolbars, and the keymap for keyboard-driven edit functions. One of the items on the roadmap is to allow these configuration structures to be cleanly manipulated after loading.
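As a hedged illustration of what such JSON might contain (the structure, file layout and command names below are guesses for illustration, not Quink’s actual schema):

```json
{
  "toolbar": ["bold", "italic", "bullet-list", "insert-image"],
  "plugins": [
    { "tag": "drawing", "url": "https://plugins.example.com/sketch/" }
  ],
  "keymap": {
    "enter-command-mode": "qq",
    "exit-command-mode": "q"
  }
}
```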
There are also a number of things which we have found it useful to allow Quink to pull from the page query parameters: the autosave frequency and destination, the destination for an explicit POST of the content as an alternate save mechanism, whether the toolbar should pop up on page load. This approach allows referring links to change aspects of the configuration which turn out to change more frequently than it is practical to change code, and seems to be a useful pattern. In future I think this will be extended and also made more generic – and capable of being disabled.
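The pattern is easy to sketch in a few lines (the parameter names here are illustrative, not Quink’s actual ones):

```javascript
// Hypothetical sketch: let the referring link override selected settings.
// The parameter names are illustrative, not Quink's actual ones.
function configFromQuery(queryString, defaults) {
    var params = new URLSearchParams(queryString);
    var config = Object.assign({}, defaults);
    if (params.has('autosaveMs')) {
        // Autosave frequency in milliseconds
        config.autosaveMs = parseInt(params.get('autosaveMs'), 10);
    }
    if (params.has('saveUrl')) {
        // Destination for an explicit POST of the content
        config.saveUrl = params.get('saveUrl');
    }
    if (params.has('toolbarOnLoad')) {
        // Whether the toolbar should pop up on page load
        config.toolbarOnLoad = params.get('toolbarOnLoad') === 'true';
    }
    return config;
}
```

In a page this would be called with `window.location.search`; anything not present in the URL falls back to the defaults.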
One other thing that is a little unusual in WYSIWYG HTML editors, and that we have included in Quink, is keyboard commands. This was also driven by frustration, as fairly heavy iOS users, with touch-based cursor positioning and selection. In my view this is one area where the Android UI is just miles better; but even then, trying to position the cursor and select text to replace, delete or format is slow and relatively tricky, because fine positioning is inherently more difficult with a touch interface than with a mouse – and I am speaking as someone who has quite steady hands. From my past in mobile surveying and mapping, I know that quite a high percentage of the population have really quite shaky hands and find fine positioning on touch screens REALLY hard.
So we added support for keyboard commands: the minimal target was simply keyboard-based navigation and selection, but the architecture delivered the ability to map a key sequence to any of the toolbar or internal commands. Because of the limitations of on-screen keyboards, we had to deliver this without control keys, and the best solution seemed to be to use standard QWERTY keys in some way. Following that line led pretty inevitably to a command mode and an insert mode, like vi. This is deeply ironic, since I grew up as an Emacs fan and avoided vi as much as I could – and now I found myself forced to implement and learn vi-like sequences to achieve what I wanted.
Where we have ended up is with two ‘q’ keys in quick succession to enter command mode, and a single ‘q’ to return to normal, or ‘insert’ mode. The default map is not the same as vi, because of course many commands are more about formatting than rapid editing of plain text, but diehards can adjust to suit their tastes! To handle the limited set of keys, it is possible to set up command groups with ‘prefix’ keys, so for example we use ‘f’ for font formatting, so ‘fb’ means ‘format:toggle-bold’ and ‘fbi’ means ‘format:(toggle-bold, toggle-italic)’.
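A sketch of how prefix-key groups can be dispatched (the map and command names are illustrative, not Quink’s real keymap):

```javascript
// Hypothetical sketch of command-mode key dispatch with prefix groups.
// 'f' opens the font-formatting group; keys within the group map to commands.
var keymap = {
    f: {
        b: 'format:toggle-bold',
        i: 'format:toggle-italic'
    }
};

// Walk the sequence: a prefix key descends into its group, and each
// leaf key inside the group adds a command, so 'fbi' yields two commands.
function dispatch(sequence) {
    var node = keymap;
    var commands = [];
    for (var i = 0; i < sequence.length; i++) {
        var next = node[sequence[i]];
        if (typeof next === 'string') {
            commands.push(next);   // a command; stay inside the current group
        } else if (next) {
            node = next;           // a prefix; descend into the group
        } else {
            return null;           // unknown key in this context
        }
    }
    return commands;
}
```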
I have a long list of to-dos.
Some should be relatively simple, such as adding a few more plug-ins; I particularly fancy the image editor Muro, not only because it seems really good, but because it is a hosted component that has the required functions, so it is also an interesting test case for the plugin architecture. After that, the next class of plugin to work on is grid/table support.
Good support of Android devices is certainly high on the list.
After that, there is some significant work to be done on div and CSS style management. Right now, Quink just exposes the browser behaviour for these areas, which is limited and often rather flaky. In principle this is all doable, but doing it well with a clean architecture is an interesting challenge.
We have some other cool ideas, but they are in a phase of stealth mode experimentation just now.
We have released Quink under the GNU Lesser GPL. The aim is to find a good balance between maximising the usefulness of, and user base around, Quink (by being relatively liberal) and getting help and input from the community on improvements. Our current understanding of the Lesser GPL with regard to Quink is that it allows people to use Quink in their apps and sites, or write add-ons and plugins, without being obliged to open source everything – thereby maximising its usefulness. However, if people find and make bug fixes or compatibility fixes to the core, the least they can do is publish them. It would be great if people became proactive and contributed plugins, plugin adapters or other significant enhancements, but that is entirely voluntary.
Prior to joining IMD in 2008, Iain worked in a variety of industries, but always at the forefront of technology development and disruptive change. In the early 90s, Iain co-founded a software start-up focused on mobile pen computing and geospatial solutions. The solution that he created enabled Ordnance Survey to become the world’s first mapping agency to use a 100% digital map collection and production system, and helped revolutionise the industry of creating and consuming geographic information.
National Hack the Government is an annual HackCamp run by Rewired State. As well as bringing great people together to have fun hacking and building things, we hope to improve transparency, open data and produce demo-able ideas that can be followed up and put into practice in real life. This is done by holding a competitive event for creating prototypes and building ingenious (and occasionally tongue-in-cheek) projects that help improve local and national services and make use of open data.
Now in its 6th year, an even bigger event has been put together to host local communities hacking around local data, issues and problems – bringing the National part of the title to life. A bar camp has also been added on the Saturday to help share information and insight, and to organically define some of the challenges that will be hacked on. This explains the “HackCamp” part of the name!
The event will be followed up by a show-and-tell of the finalists’ work – taking the winners from each centre and allowing them to demo their ideas to the government, businesses and the wider community.
National Hack the Government is sponsored by:
Simpl Challenges is our innovation platform that connects local public services and organisations with innovators and ideas. Gathering ideas before the event on Simpl will give more time for learning, talking and listening to each other, and building some fantastic prototypes on the day itself.
Follow the links below to find out more and submit your ideas for your local event, or you can take part remotely by submitting your ideas to the UK section:
- UK: What are your ideas to code a better country?
- Glasgow: What is on your agenda for improving public services in Glasgow?
- Exeter: How can you use local data to hack Highways and Healthy Communities?
- Leeds: How can you visualise the city either for citizens, business, or both?
- Bournemouth: How can data improve the lives of people in Bournemouth?
You can also comment on the ideas submitted, so that the idea owners can get valuable feedback on their projects before the event has even started.
Rewired State creates bespoke hack events that bring creative developers, designers and industry experts together to solve real world problems – promoting and supporting more than 1,200 of the UK’s most talented and inventive software developers and designers, as well as nurturing 1,500 promising worldwide developers under 18 through the Young Rewired State network.
The 9th Annual PHP UK conference is over! Lots of people, a very posh venue, great food and very efficient and friendly organizers: Johanna Cherry, Sam Bell and Ciarán Rooney. What else do you need at the London PHP conference? Possibly some talks! I am told they were very good.
The conference offered 3 tracks with 27 internationally known speakers, and of course some beer events at the local pubs plus the Friday evening gathering in the exhibition hall.
For keynotes we had:
- What Makes Technology Work by Juozas ‘Joe’ Kaziukenas – compared making wooden chairs to building apps
- The Future of PHP is in the Cloud by Glen Campbell – reviewed some of the key developments in PHP over the last few years and outlined how PHP can keep pace with the explosive growth of the cloud.
Some interesting talks included:
- PHP in Space by Derick Rethans – how PHP can be used for all kinds of terrestrial and non-terrestrial purposes… Expect trigonometry and other maths, and rocket science/explosions! It sounds like it would have been great fun to attend this talk.
- Debugging HTTP by Lorna Mitchell – curl, Wireshark and Charles, the tools you will want to have at hand. Most of the comments were along the lines of “great talk, will be using Charles…”. I have a very special relationship with Lorna, as she helps me market her book PHP Web Services to her audience. Well done Lorna!
- PHP at the Firehose Scale by Stuart Herbert – Here’s one to give the PHP bashers a well-deserved black eye! Twitter is one of the world’s best-known social media sites, handling over 500 million public tweets a day (that’s around 6,000 tweets a second). How do they do it? With the help of PHP, of course.
- If you were suffering from an information overdose and wanted to relax, you could go to THE CLOUD BAR (presented by Engine Yard) – an awesome chill-out space with free swag, free coffee, and free info about everything cloud.
- Or, if you were feeling full of energy, you could join the HACKATHON (presented by JetBrains), running alongside the main tracks and late into Friday night, using Sochi Winter Olympics API data. Prizes were awarded throughout the Hackathon for the most innovative hacks, including two free tickets to next year’s conference, PhpStorm licenses and more.
Make a note in your diary for the 10th London PHP Conference which I am sure will be even more awesome.
The two-day FOSDEM – the biggest Open Source meeting in Europe – is over! As you know, FOSDEM is a free event that offers open source communities a place to meet, share ideas and collaborate. It is renowned for being highly developer-oriented and brings together 5000+ geeks from all over the world.
O’Reilly has sponsored FOSDEM since 2002, when it changed its name from OSDEM to FOSDEM. And since 2002 I have been attending FOSDEM – not attending the talks, but selling numerous O’Reilly books with my colleagues. Our tables are still in the H Block, but the location has improved: we are now a few metres away from the doors, which means it is no longer so cold. Even though we are very well looked after by the members of the organisation and by the delegates, who provide us with tea and coffee, we still cannot have lunch until 4 pm as we are so busy. Please note: this is not a complaint, just a fact. Thank you guys for bringing us food and drinks.
For the last couple of years, I have started the weekend by going to the Delirium to meet some friends. The Delirium is a huge pub, two minutes from the Grand Place. I believe most of the drinks are sponsored, so you can imagine the number of people going there – it is not big enough to cater for everybody, so lots of people end up drinking, talking and greeting each other in the street. After an hour or less, and a long wait for a coke, I had to leave, as I was already thinking of getting up early to set up for the next day.
This year we were incredibly lucky. Our 51 boxes of books had been left at the other end of the hall – not good. Our luck changed with the arrival of 3 guys from southern Germany – unfortunately I know neither their names nor even the town they came from. At around 7 am on Saturday morning they came into the hall and said something like “it is cold outside and it is raining, can we stay in?” “But of course, no problem,” was the answer, “but you might have to help us with these boxes.” And those 3 great guys put the boxes on a very dilapidated trolley (several trips), brought them over and opened them for us. To me these three gentlemen epitomise FOSDEM – helping each other.
For some of you FOSDEM is the meeting of great open source minds; for me it is the meeting of friends – some I see only once a year, some more often, but we are always very happy to see each other. There I learn about your achievements and your dreams (one moved to Facebook in San Francisco, another had a baby, and so on). One man very proudly showed me one of the first O’Reilly/FOSDEM bags that we created years ago. With a lot of pride, and very carefully, he got the bag out of his pocket and filled it with new books.
I will not bore you with a description of the content of FOSDEM, you can see that on Philip’s (Fosdem.org) video below.
On Saturday evening we had dinner with the Perl mongers – again a very multinational gathering. As the dinner was for 50 people, I will only mention a few: Liz and Wendy, the famous Dutch duo who always sponsor these dinners; Curtis “Ovid” Poe, O’Reilly author; Laurent Boivins and Marc Chantreux of Perl France; Sawyer X, who I hope will soon be interviewed about Dancer and published on this blog; and Marian Marinov, from Sofia, who is organizing YAPC::Europe 2014 (22nd-24th August). I think I only talked about YAPC and how to make it an even greater conference.
Two of our authors came to see us:
Pieter and Anil, should you be reading this post, I can confirm that we sold all the copies of your books – thank you for the marketing assistance during your talks.
I met Sarah Novotny, co-chair of OSCON for the last couple of years – unfortunately being at FOSDEM, we did not have a chance to talk apart from a very brief greeting and see you later. I was also extremely happy to meet Constantin Dabro who is the leader of the Burkina Faso Java User Group.
On Sunday evening, after packing 7 boxes of leftover books, somebody told me that I was the mother of FOSDEM. I thought: not the mother – I feel more like the great-grandmother, or somebody who has run a couple of marathons in two days :))
There is no typo in the title of this post. It intentionally reads ‘&&’. && is a boolean operator used in many programming and scripting languages. Why would an artist care about a boolean operator? That’s correct: because this particular artist is also a geek. “What’s a geek?”, other artists might ask. A geek is someone interested in tech (programming, devices, electronics), but also interested in social interaction.
The artist part of the title refers to the fact that I paint and sculpt, and the geek part is about my interest in PHP and Zend Framework, and a few other things one can program with and for the web (like 3D scenes). To make things worse, I also like to write. Why worse? You guessed right again: it leaves me craving time.
Can I really be an artist and a geek (and a writer) at the same time? Or will any or both suffer from the fact that a day has only a bare twenty-four hours? That’s where the boolean operator comes into play. It’s up to you: if you think I’m really an artist and I’m also really a geek, then both operands (Artist and Geek) evaluate to true (as geeks tend to say) and the entire expression will also evaluate to true, the entire expression meaning ‘Artist && Geek’.
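In code, the claim reads like this:

```javascript
// The && operator: the whole expression is true only when both operands are.
var artist = true;
var geek = true;

console.log(artist && geek);   // true – both operands hold
console.log(artist && !geek);  // false – one false operand is enough
```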
Leaving it up to others if I’m either of the two saves me a lot of headaches and hopefully some time. So although I definitely think of myself as Artist && Geek, I shall not be bothered whether this title is true or false.
With that out of the way, I should come to answering Josette’s question that led to this post: what’s it like to be both?
Being an Artist and a Geek
The fact that I seem to be always fighting for time is not the most interesting part. A lot of people do and some even go on training courses to learn how to manage their time. I don’t. Why not? Because losing some time every now and then is part of being an artist. In general, it is part of being creative. What happens when you lose time? You lose time either because you are not paying attention to it, or because someone else is wasting it for you. Wasting? Not paying attention? Let me look into those two a bit further.
Not paying attention
Not paying attention reminds me of my teacher in second grade. She shouted at me when I sat in her classroom daydreaming. Is daydreaming a waste of time? Daydreaming is what creative people do to get ideas. Nowadays, I do not get enough time to daydream, because I have a day job as a self-employed geek, and also because I’m addicted to typing code and watching it come to life, so I tend to code, and write about coding, during as many evenings as I can steal from the remainders of my social life. Whenever I do get a little time to daydream, it tends to be on the drive to work, when the sun is rising from the early morning fog.
It is dangerous to daydream in the car while driving, so the dream only lasts long enough to capture a picture (a few milliseconds are sufficient) that I can paint later, on one of the rare days that I can spend in my art studio. I see other artists develop really creative and great ideas and I can tell that they spent a lot of time dreaming. I’m not a violent person, but it’s better that my second grade teacher and I do not ever meet again.
Wasting time
Can time really be wasted? If, in this busy life, you have to wait half an hour for your dentist because he needs a little extra time to help a new patient, is your time wasted? What would the Artist do with that time? He would look at the other people waiting. He would imagine what kind of lives they live, maybe even speak to them to find out. He would draw a sketch of the bored and waiting, and later turn it into a painting. But what does the Geek do? He is prepared for this kind of situation: he takes out his MacBook Air, which he can carry everywhere because it weighs next to nothing. And he starts coding – because he’s an addict, but also because he knows that with no pressure, he might get a better idea than when facing a tight deadline at work. One could state that at this point the Geek really gets in the way of the Artist. On the other hand, the Artist will help the Geek to be creative, since coding is considered art or poetry by many. Thinking about it, the Geek should have remembered to bring his iPad as well, so that the Artist could draw on it.
The Artist and the Geek helping each other
While the above may seem unfair, in that the Geek gets in the way of the Artist while the Artist merely helps the Geek type code poetry, the Geek also helps the Artist by supporting him financially. The first thing people want to know about an artist is how he gets by money-wise. In my case, the Geek allows the Artist to do anything he likes – that is, as long as time permits (which is, as we saw earlier, not very long). But spending a lot of time on a work of art does not necessarily make it better. A painting may germinate for months or even years, or it can be thrown at the canvas in an outburst of creativity after pressure-cooking in the artist’s mind for a considerable amount of time. All of this can be done independently. There is no need to sell any of these artworks, although most of them are for sale and many will eventually find a new owner.
For the above painting I did four different small sketches in different types of paint, only to figure out what should be on it and what would better be left out. It took me four years to decide.
There is something about today’s art that makes me question it a lot if it is done in a traditional way. This includes my own art. The famous artists of the past are admired because they found the most powerful ways available in their era to express themselves, or to express an idea of general interest. They also found new ways of expression. Some great artists were entrepreneurs – some successful, some not so successful. I think you can compare Rubens to Steven Spielberg. Both orchestrated large scenes, using the most powerful visualization techniques of their time. Neither did this alone.
Many say that a true artist chooses the best materials to express his idea. This can be said of both Rubens and Spielberg. That’s where my first problem lies: I tend to like certain materials, while I do not focus much on ideas. I like traditional paint and wood. So I mostly paint using traditional painting techniques, and I sculpt in wood. Wrong approach! Today’s most powerful material is the byte. What? Yes, the byte. The fact that you’ve read this far proves the power of bytes. Bytes become even more powerful if you create games with them. Games, especially 3D games, deliver a total experience that takes possession of the player. It’s the most powerful and immersive way to express ideas currently known to mankind.
Yet the old techniques have not died completely. When you write a book, the reader immerses himself in it. While this requires effort from the reader, the immersion can be complete. The same is true for paintings, but they should not be judged by their reproductions (displayed on either a computer screen or in print). A reproduction of a painting does not allow for immersion. It is simply impossible, believe it or not. The immersive qualities of a painting are crafted by the painter by means of continuous deployment: a painter steps back and forth in front of his work to test the immersive qualities. These differ depending on the distance to the work and the way the light falls on the painting. These qualities are completely absent from any type of reproduction.
With people staring into their tablets and smartphones all day, fewer people have time to go out and see actual paintings, let alone immerse themselves in them. Research has shown that if they go to a museum to look at art at all, they look at a work for 9,000 milliseconds on average (it hurts too much to write this number down expressed in seconds, or even hours, so I’ll leave that as an exercise for the reader). Therefore, I fear for the future of painting in the traditional sense of the word, and I am glad that I also have some power over bytes: the most powerful raw material of the modern age.
Some encouraging thoughts
I think guaranteed ways to waste time exist. It is a waste of time to do repetitive work. You’ve done it before, it is not a new experience. It is detrimental to your creativity and brainpower. Repetitive work should be automated, either by machines or by coding.
Although bytes have a lot of power, in the opinion of the Artist nothing compares to recreating, with his own hands and traditional paint, an atmosphere as he has experienced it in a landscape. This requires vision, experience, speed, creativity and courage. His own judgement is the harshest he can get. Nothing compares to holding an idea in his hands and turning it around, knowing that he has shaped it out of wood that didn’t really want to be shaped: wood that resisted like mad, but had to give in to his insistent chiselling.
What about the other Perl frameworks, Dancer and Mojolicious? How do they compare to Catalyst?
Dancer’s big strength is making things quick and easy for smaller apps; you don’t have to think in terms of OO unless you want to, and plugins generally shove a bunch of extra keywords into your namespace that are connected to global or per-request variables. Where Catalyst doesn’t have an exact opinion about a lot of the structure of your code but very definitely insists that you pick one and implement it, Dancer basically lets you do whatever you like without really thinking too much about it.
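For what it’s worth, the keyword style being described looks roughly like this – a minimal sketch using Dancer2 syntax, with a made-up route and greeting:

```perl
#!/usr/bin/env perl
# Minimal sketch of Dancer's keyword-driven style (Dancer2 syntax);
# the route and greeting are invented for illustration.
use Dancer2;

# 'get' and 'param' are keywords the framework exports into your
# namespace; 'param' reaches into per-request state for you, no OO needed.
get '/hello/:name' => sub {
    return 'Hello, ' . param('name');
};

dance;
```

Compare that with a Catalyst application, where the same route would live as an annotated method inside a controller class.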
That really isn’t meant as a criticism: somewhere along the line I picked up a commit bit to Dancer as well and they’ve achieved some really good things – providing something that’s as little conceptual overhead as possible for smaller apps, and something where there’s a very direct mapping between the concepts involved and what’s actually going to happen in terms of request dispatch, whereas Catalyst abstracts things more thoroughly, so there’s a trade-off there. I mean, I was saying before that empty methods with route annotations almost always end up getting some code in them eventually. If you get to 1.0 and most of those methods are still empty, you might’ve been able to write a lot less code or at least do a lot less thinking that turned out not to have been necessary if you’d used Dancer instead. Equally, I’ve seen Dancer codebases that have got complicated enough to turn into a gnarly, tangled mess and the developers are looking and thinking, “You know, maybe I was wrong about Catalyst being overkill…”
I love the accessibility of Dancer though, and the team are great guys. I’ve seen the Catalyst community send people who’re clearly lost trying to scale the learning curve to use Dancer instead, and I’ve seen the Dancer community tell people they’re doing enough complicated things at once to go look at Catalyst instead. And hey, we both run on top of PSGI/Plack, so you can have /admin served by Dancer and everything else by Catalyst … or the other way around … or whatever.
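That kind of mixed deployment is exactly what Plack::Builder’s mount is for. A sketch, assuming a hypothetical Dancer2 app called MyAdmin and a hypothetical Catalyst app called MyApp:

```perl
# app.psgi -- serving /admin with Dancer and everything else with Catalyst.
# MyAdmin (Dancer2) and MyApp (Catalyst) are invented application names.
use strict;
use warnings;
use Plack::Builder;
use MyAdmin;
use MyApp;

builder {
    mount '/admin' => MyAdmin->to_app;   # Dancer2 apps expose to_app
    mount '/'      => MyApp->psgi_app;   # Catalyst apps expose psgi_app
};
```

Any PSGI server (plackup, Starman and so on) can then run both apps side by side from the one .psgi file.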
Meantime, Mojolicious is taking its trade-offs in a different dimension. Sebastian Riedel, the project founder, was also founder of Catalyst; he left the Catalyst project just before the start of, I think, the 5.70 release cycle, because we’d acquired a lot of users who’d bet the business on Catalyst’s stability at that point, and a lot of contributors who thought that was OK, and Sebastian got really, really frustrated at the effort involved in maintaining backwards compatibility.
So he went away and rethought everything, and Mojo has ended up having its own implementations of a lot of things, focused on where they think people’s needs in web development are going over the next few years. A company heavily using Mojo open sourced a real-time IRC web client recently, doing a lot of clever stuff, and Mojolicious helped them with that substantially. But the price they end up paying is that when you step outside the ecosystem it’s quite jarring, because the standards and conventions aren’t quite the same as the rest of modern Perl. Mojolicious has a very well-documented staged backcompat breakage policy which they stick to religiously, and for “move fast and break stuff” style application development, I think they’ve got the policy pretty much spot on and they’re reaping a lot of advantages from it.
But for a system where you want to be able to ignore a section that users are happy with and then pick it up down the line when the state of the world changes and it needs extending (for example, for the people doing boring business systems where finding out 5 seconds sooner that somebody edited something would be nice to have) but what really matters is that the end result is correct, then I’m not sure I’m quite so fond of the approach. I think if you were going to talk about a typical backend for each framework, you could say that Dancer’s would be straight SQL talking to MySQL, Catalyst’s would be some sort of ORM talking to PostgreSQL, and Mojolicious’ would be a client library of some sort talking to MongoDB. Everybody’s going to see some criticism of the other two implicit in each of what I said, but take it as a compliment to their favourite: if they don’t, that’s because my metaphor failed rather than anything else.
I can’t recall a time when I’ve seen an app that was a reasonable example of its framework where I really thought that either of the other two would’ve worked out nearly as well for them... with the exception of a fairly small Catalyst app that, in spite of being, if anything, a bit small for Catalyst to make sense, turned into a crawling horror when ported to Mojolicious – but then again, when I showed that to Sebastian and asked, “am I missing something here?”, the only reply I got was some incoherent screaming, the underlying meaning being, “WHAT HAVE THEY DONE TO MY POOR FRAMEWORK?! CANNOT UNSEE, CANNOT UNSEE!” So I think, as with Dancer, it’s a matter of there being more than one way to do it, and one or other of them is going to be more reasonable depending on the application.
What’s on Catalyst’s wish-list, in which direction is the project moving, and do you think that someday Catalyst’s adoption will be so widespread that it will become the reason for restoring the P (= Perl) in LAMP?
The basic goals at the moment are a mixture of adding convenience features for situations that are common enough now to warrant them but weren’t, say, five years ago; continuing to refactor the core to enable easier and cleaner extension; and to figure out a path forwards that lets us clean up the API to push new users onto the best paths while not punishing people that use older approaches.
As for putting the P back in LAMP? I’ve regarded it as standing for “Perl, PHP or Python” for as long as I can remember. It doesn’t seem to me that treating it as a zero sum game is actually useful: in the world of open source, crushing your enemies might be satisfying but encouraging them and then stealing all their best ideas seems like much more fun to me.
What are your thoughts on Perl 6 and, given the opportunity, would you someday re-write Catalyst in it?
I’ve spent a fair amount of time and energy over the years making sure that the people thinking hard about language design for both the Perl5 and Perl6 languages talk to each other and share ideas and experiences reasonably often, but I’m perfectly comfortable with Perl5 as my primary production language for the moment, and so long as the people who actually know what they’re doing with this stuff are paying attention, I don’t feel the need to worry about it that much.
One of the things I’m really hoping works out is the whole MoarVM plan, wherein Rakudo will end up with a solid virtual machine that was designed from the start to be able to embed libperl and thereby call back and forth between the languages. So if that plan comes off, then I don’t think you’d ever write Catalyst in Perl6 so much as you could write parts of Catalyst apps in Perl6 if you wanted to… and maybe one day there’d be something that uses features that are uniquely Perl6-like that turns out to be technologically more awesome. You can still write parts of those apps in Perl5 if it makes sense, but I don’t think looking at the two languages in the Perl family as some sort of competition is that useful. I much prefer a less dogmatic approach, similar to the saner of the people I know who are into various Lisp dialects.
So it’s more about experimenting in similar spaces and learning and sharing things. Being a language family is often cited as a reason why Lisp never took over the world, but Perl taking over the world got us Matt’s Script Archive and a generation of programmers who thought the language was called PERL and fit only for generating write-only line noise, whereas being a language family seems to have pretty effectively given Lisp immortality, albeit a not-entirely-mainstream sort of immortality.
I think, over a long enough timeline, I could pretty much live with that (absent a singularity or something I’ll probably be dead in about the number of years that Lisp has existed), and I think if there is a singularity then programming languages afterwards won’t look anything like they do now… although, admittedly, I still wouldn’t be surprised if my favorite of whatever they do look like was designed by Larry Wall.
Does all that flexibility come at a price?
The key price is that while there are common ways to do things, you’re rarely going to find One True Way to solve any given problem. It’s more likely to be “here’s half a dozen perfectly reasonable ways, which one is best probably depends on what the rest of your code looks like”, plus, while there’s generally not much integration-specific code involved, everything else is a little more DIY than most frameworks seem to require.
I can put together a Catalyst app that does something at least vaguely interesting in a couple of hours, but doing the sort of 5-minute wow-moment thing that intro screencasts and marketing copy seem to aim for just doesn’t happen, and often when people first approach Catalyst they tend to get a bit overwhelmed by the various features and the way you can put them together.
There’s a reflex of “this is too much, I don’t need this!”. But then a fair percentage of them come back two or three years later, have another look and go, “ah, I see why I want all these features now: I’d have written half as much code if I hadn’t assumed I didn’t need them”. Similarly, the wow moment is usually three or six months into a project, when you realise that adding features is still going quickly because the code’s naturally shaken out into a sensible structure.
So, there’s quite a bit of learning, and it’s pretty easy for it to look like overkill if you haven’t already experienced the pain involved. It’s a lot like the use strict problem writ large – declaring variables with ‘my’ in appropriate scopes rather than making it up as you go along is more thinking and more effort to begin with, so it’s not always easy to get across that it’s worth it until the prospective user has had blue daemons fly out of his nose a couple of times from mistakes a more structured approach would’ve avoided.
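The use strict analogy in concrete terms – a small sketch (the variable names are invented) of how declaring variables with ‘my’ in proper scopes pays off:

```perl
#!/usr/bin/env perl
# With strict, every variable must be declared with 'my' in a scope,
# so typos become compile-time errors instead of silent new globals.
use strict;
use warnings;

my $count = 0;
for my $item (1 .. 3) {    # $item is lexically scoped to the loop only
    $count += $item;
}
print "$count\n";          # prints 6

# Without 'use strict', a typo like  $cuont = 5;  would quietly create
# a fresh global variable; with it, the program refuses to compile.
```

The upfront cost is a line of declaration here and there; the payoff is a whole class of mistakes that can no longer happen, which is the same trade Catalyst makes at the architectural level.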
So, it’s flexibility at the expense of a steep learning curve. But apart from that, if I could compare Catalyst to Rails, I would say that Rails tries to be more like a shepherd, guiding the herd the way it thinks they should go, while Catalyst allows room to move and make your own decisions. Is that a valid interpretation?
It seems to me that Rails is very much focused on having opinions, so there’s a single obvious answer for all the common cases. Where you choose not to use a chunk of the stack, whatever replaces it is similarly a different set of opinions, whereas Catalyst definitely focuses on apps that are going to end up large enough to have enough weird corners that you’re going to end up needing to make your own choices. So Rails is significantly better at making easy things as easy as possible, but Catalyst seems to do better at making hard things reasonably natural if you’re willing to sit down and think about it.
I remember talking to a really smart Rails guy over beer at a conference (possibly in Italy) and the two things I remember the most were him saying “my customers’ business logic just isn’t that complicated and Rails makes it easy to get it out of the way so I can focus on the UI”, and when I talked about some of the complexities I was dealing with, his first response was, “wait, you had HOW many tables?”.
But surely, like Rails, Catalyst offers functionality out of the box too. What tasks does Catalyst take care of for me and which ones require manual wiring?
There’s a huge ecosystem of plugins, extensions and so forth in both cases but there’s a stylistic difference involved. Let me talk about the database side of things a sec, because I’m less likely to get the Rails-side part completely wrong.
Every Rails tutorial I’ve ever seen begins “first, you write a migration script that creates your table” … and then once you’ve got the table, your class should just pick up the right attributes because of the columns in the database, and that’s … that’s how you start, unless you want to do something non-standard (which I’m sure plenty of people do, but it’s an active deviation from the default), whereas you start a catalyst app, and your first question is “do I even want to use a database here?”
Then, assuming you do, DBIx::Class is probably a default choice, but if you’ve got a stored-procedure-oriented database to interface to, you probably don’t need it, and for code that’s almost all insanely complex aggregates, objects really aren’t a huge win. But let’s assume you’ve gone for DBIx::Class.
Now you ask yourself “do I want Perl to own the database, or SQL?” In the former case you’ll write a bunch of DBIx::Class code representing your tables, and then tell it to create them in the database; in the latter you’ll create the tables yourself and then generate the DBIx::Class code from the database. There’s not exactly an opinion of which is best: generally, I find that if a single application owns the database then letting the DBIx::Class code generate the schema is ideal, but if you’ve got a database that already exists that a bunch of other apps talk to as well, you’re probably better having the schema managed some way between the teams for all those apps and generating the DBIx::Class code from a scratch database.
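The two directions can be sketched like this – the class, table and connection details are made up for illustration:

```perl
# Direction 1: Perl owns the schema. Define a Result class by hand...
package MyApp::Schema::Result::Author;   # hypothetical table/class
use strict;
use warnings;
use base 'DBIx::Class::Core';

__PACKAGE__->table('authors');
__PACKAGE__->add_columns(
    id   => { data_type => 'integer', is_auto_increment => 1 },
    name => { data_type => 'varchar', size => 255 },
);
__PACKAGE__->set_primary_key('id');

1;

# ...then, elsewhere, tell DBIx::Class to create the tables for you:
#   $schema->deploy;
#
# Direction 2: SQL owns the schema. Generate the Result classes from a
# live database with DBIx::Class::Schema::Loader's dbicdump tool:
#   dbicdump -o dump_directory=./lib MyApp::Schema 'dbi:Pg:dbname=myapp'
```

Either way you end up with the same kind of Result classes; the only question is which side – the Perl code or the SQL schema – is the source of truth.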
Both of those are pretty much first class approaches, and y’know, if your app owns the database but you’ve already got a way of versioning the schema that works, then I don’t see why I should stop you from doing that and generating the DBIC code anyway. So it’s not so much about whether manual wiring is required for a particular task or not, but how much freedom you have to pick an approach to a task, and how many decisions that freedom requires before you know how to fire off the relevant code to set things up. I mean, whether you classify code as boilerplate or not depends on whether you ever foresee wanting to change it.
So when you first create a Catalyst controller, you often end up with methods that participate in dispatch – have routing information attached to them – but are completely empty of code, which tends to look a little bit odd, so you often get questions from newbies of “why do I need to have a method there when it doesn’t do anything?”, but then you look at this code again when you’re getting close to feature complete, and almost all of those methods have code in them now, because that’s how the logic tends to fall out.
There’s two reasons why that’s actively a good thing. First, because there was already a method there, even if it was a no-op to begin with, the fact it’s a method is a big sign saying “it’s totally OK to put code here if it makes sense”, which is a nice reminder, and makes it quite natural to structure your code in a way that flows nicely. Secondly, any other approach would cost you the time to declare each non-method route plus the time to redeclare as methods all the routes that acquired logic, so if most of your methods end up with code in them, then overall, for reasonably complex stuff, the Catalyst style ends up being less typing than anything else would be. But again, we’re consciously paying a little bit more in terms of upfront effort as you’re starting, to enable maintainability down the road.
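As a rough illustration of those dispatch-participating methods, here is a sketch of a controller using Catalyst’s Chained dispatch attributes – the controller name, paths and stash keys are all invented:

```perl
package MyApp::Controller::Books;   # hypothetical controller
use Moose;
use namespace::autoclean;
BEGIN { extends 'Catalyst::Controller' }

# This method participates in dispatch via its attributes but starts out
# empty -- exactly the "why is this method here?" case described above.
sub base :Chained('/') :PathPart('books') :CaptureArgs(0) { }

# Months later, logic tends to land in it naturally, e.g.:
#   sub base ... { my ($self, $c) = @_;
#                  $c->stash(books_rs => $c->model('DB::Book')); }

sub list :Chained('base') :PathPart('') :Args(0) {
    my ($self, $c) = @_;
    $c->stash(template => 'books/list.tt');   # hypothetical template
}

__PACKAGE__->meta->make_immutable;

1;
```

The empty base method already anchors the /books part of the URL space, so when shared setup code is needed there is an obvious, already-routed place to put it.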
It’s easy to forget that Catalyst is not just a way of building sites, but also a big, big project in software architecture/engineering terms, built with best practices in mind.
Well, it is and it isn’t: there’s quite a lot of code in there that’s actually there to support not best practices, but not forcing people to rewrite code until they’re adding features to it, since if you’ve got six years’ worth of development and a couple hundred tables’ worth of business model, “surprise! we deleted a bunch of features you were using!” isn’t that useful, even when those features only existed because our design sucks (in hindsight, at least).
I’d say, yeah, that things like Chained dispatch and the adaptor model and the support for roles and traits pretty much everywhere enables best practices as we currently understand them. But there’s also a strong commitment to only making backwards incompatible changes when we really have to, because the more of those we make, the less likely people are to upgrade to a version of Catalyst that makes it easy to write new code in a way that sucks less (or, at least, differently).
But there’s a strong sense in the ecosystem, and in the way the community tends to do things, of trying to make it possible to do things as elegantly as possible, even with a definition of elegant that evolves over time. So you might wish that your code from, say, 2008, looked a lot more like the code you’re writing in 2013, but they can coexist just fine until there’s a big features push on the code from 2008, and then you refactor and modernise as you go. And we’ve always had a bias towards modernization being done as extensions, and towards prioritising making more things possible as extensions over adding things into the core.
So, for example, metacpan.org is a Catalyst app using Elasticsearch as a backend, and people are using assorted other non-relational things and getting on just fine … and back in 2006 the usual ORM switched from Class::DBI to DBIx::Class and it wasn’t a big deal (though DBIx::Class got featureful enough that people’s attempts to obsolete it have probably resulted in more psychiatric holds than they have CPAN releases), and a while back we swapped out our own Catalyst::Engine:: system for handling HTTP environment abstraction in favour of Plack code implementing the PSGI spec, and that wasn’t horribly painful and opened up a whole extra ecosystem.
Even in companies conservative enough to be still running 5.8.x Perl, most of the time you still tend to find that they’ve updated the ecosystem to reasonably recent versions, so they’re sharing the same associated toolkits as the newly built code in greenfield projects. So we try to avoid ending up too far out of date without breaking existing production code gratuitously: nudge people towards more modern patterns of use, don’t interfere with people who love the bleeding edge, but don’t force that on the users we have who don’t want it either. So sometimes things take longer to land than people might like. There’s a lot of stuff to understand, but if you’re thinking in terms of core business technology rather than hacking something out that you’ll rewrite entirely in a year when Google buys you or whatever, I think it’s a pretty reasonable set of trade-offs.
In the forthcoming third and last part of the interview, we talk about the other Perl frameworks, Dancer and Mojolicious, the direction in which the project is moving, and whether Perl 6 is a viable option for Web development.
Nikos Vaggalis has a BSc in Computer Science and a MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He is interested in anything related to the RDBMS and loves crafting complex SQL queries for generating reports. As a journalist, he writes articles, conducts interviews and reviews technical IT books.