All about Catalyst – interview of Matt S. Trout (Part 3 of 3)

What about the other Perl frameworks, Dancer and Mojolicious? How do they compare to Catalyst?

Dancer’s big strength is making things quick and easy for smaller apps; you don’t have to think in terms of OO unless you want to, and plugins generally shove a bunch of extra keywords into your namespace that are connected to global or per-request variables. Where Catalyst doesn’t have an exact opinion about a lot of the structure of your code but very definitely insists that you pick one and implement it, Dancer basically lets you do whatever you like and not really think too much about it.
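For flavour, that keyword-driven style looks roughly like this – a minimal made-up app, not something from the interview:

    use Dancer;

    # 'get' and 'params' are keywords Dancer exports into your namespace;
    # per-request data is reached through them, with no OO ceremony needed
    get '/hello/:name' => sub {
        return "Hello, " . params->{name};
    };

    dance;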

That really isn’t meant as a criticism: somewhere along the line I picked up a commit bit to Dancer as well, and they’ve achieved some really good things, providing something that’s as little conceptual overhead as possible for smaller apps, and something where there’s a very direct mapping between the concepts involved and what’s actually going to happen in terms of request dispatch, whereas Catalyst abstracts things more thoroughly, so there’s a trade-off there. I mean, I was saying before that empty methods with route annotations almost always end up getting some code in them eventually. If you get to 1.0 and most of those methods are still empty, you might’ve been able to write a lot less code – or at least do a lot less thinking that turned out not to have been necessary – if you’d used Dancer instead. Equally, I’ve seen Dancer codebases that have got complicated enough to turn into a gnarly, tangled mess, and the developers are looking at it and thinking, “You know, maybe I was wrong about Catalyst being overkill…”

I love the accessibility of Dancer though, and the team are great guys. I’ve seen the Catalyst community send people who’re clearly lost trying to scale the learning curve to use Dancer instead, and I’ve seen the Dancer community tell people they’re doing enough complicated things at once to go look at Catalyst instead. And hey, we both run on top of PSGI/Plack, so you can have /admin served by Dancer and everything else by Catalyst … or the other way around … or whatever.

Meantime, Mojolicious is taking its trade-offs in a different dimension. Sebastian Riedel, the project founder, was also the founder of Catalyst; he left the Catalyst project just before the start of, I think, the 5.70 release cycle, because we’d acquired a lot of users who’d bet the business on Catalyst’s stability at that point, and a lot of contributors who thought that was OK, and Sebastian got really, really frustrated at the effort involved in maintaining backwards compatibility.

So he went away and rethought everything, and Mojo has ended up having its own implementations of a lot of things, focused on where they think people’s needs in web development are going over the next few years. A company heavily using Mojo open sourced a realtime IRC web client recently, doing a lot of clever stuff, and Mojolicious helped them with that substantially. But the price they end up paying is that when you step outside the ecosystem it’s quite jarring, because the standards and conventions aren’t quite the same as the rest of modern Perl. Mojolicious has a very well-documented staged backcompat breakage policy which they stick to religiously, and for “move fast and break stuff” style application development, I think they’ve got the policy pretty much spot on and they’re reaping a lot of advantages from it.

But for a system where you want to be able to ignore a section that users are happy with and then pick it up down the line when the state of the world changes and it needs extending (for example, the boring business systems where finding out 5 seconds sooner that somebody edited something would be nice to have, but what really matters is that the end result is correct), I’m not sure I’m quite so fond of the approach. I think if you were going to talk about a typical backend for each framework, you could say that Dancer’s would be straight SQL talking to MySQL, Catalyst’s would be some sort of ORM talking to PostgreSQL, and Mojolicious’ would be a client library of some sort talking to MongoDB. Fans of each are going to see some criticism of the other two implicit in what I just said, but I’d rather they took what I said about their favourite as a compliment: if they don’t, that’s because my metaphor failed rather than anything else.

I can’t recall a time when I’ve seen an app that was a reasonable example of its framework where I really thought that either of the other two would’ve worked out nearly as well for them... with the exception of a fairly small Catalyst app that, in spite of being, if anything, a bit small for Catalyst to make sense, turned into a crawling horror when ported to Mojolicious. But then again, when I showed that to Sebastian and asked, “am I missing something here?”, the only reply I got was some incoherent screaming, the underlying meaning being, “WHAT HAVE THEY DONE TO MY POOR FRAMEWORK?! CANNOT UNSEE, CANNOT UNSEE!” So I think, as with Dancer, it’s a matter of there being more than one way to do it, and one or other of them is going to be more reasonable depending on the application.

What’s on Catalyst’s wish-list, in which direction is the project moving, and do you think that someday Catalyst’s adoption will be so widespread that it will become the reason for restoring the P (for Perl) in LAMP?

The basic goals at the moment are a mixture of adding convenience features for situations that are common enough now to warrant them but weren’t, say, five years ago; continuing to refactor the core to enable easier and cleaner extension; and figuring out a path forwards that lets us clean up the API to push new users onto the best paths while not punishing people that use older approaches.

As for putting the P back in LAMP? I’ve regarded it as standing for “Perl, PHP or Python” for as long as I can remember. It doesn’t seem to me that treating it as a zero-sum game is actually useful: in the world of open source, crushing your enemies might be satisfying, but encouraging them and then stealing all their best ideas seems like much more fun to me.

What are your thoughts on Perl 6 and given the opportunity, would you someday re-write Catalyst in it?

I’ve spent a fair amount of time and energy over the years making sure that the people thinking hard about language design for both the Perl5 and Perl6 languages talk to each other and share ideas and experiences reasonably often, but I’m perfectly comfortable with Perl5 as my primary production language for the moment, and so long as the people who actually know what they’re doing with this stuff are paying attention, I don’t feel the need to pay that much attention myself.

One of the things I’m really hoping works out is the whole MoarVM plan, wherein Rakudo will end up with a solid virtual machine that was designed from the start to be able to embed libperl and thereby call back and forth between the languages. So if that plan comes off, then I don’t think you’d ever rewrite Catalyst in Perl6 so much as write parts of Catalyst apps in Perl6 if you wanted to… and maybe one day there’d be something that uses features that are uniquely Perl6-like that turns out to be technologically more awesome. You could still write parts of those apps in Perl5 if it makes sense, but I don’t think looking at the two languages in the Perl family as some sort of competition is that useful. I much prefer a less dogmatic approach, similar to the saner of the people I know who are into various Lisp dialects.

So it’s more about experimenting in similar spaces and learning and sharing things. And while being a language family is often cited as a reason why Lisp never took over the world… well, Perl taking over the world got us Matt’s Script Archive and a generation of programmers who thought the language was called PERL and was fit only for generating write-only line noise, whereas being a language family seems to have pretty effectively given Lisp immortality, albeit a not-entirely-mainstream sort of immortality.

I think, over a long enough timeline, I could pretty much live with that (absent a singularity or something, I’ll probably be dead in about the number of years that Lisp has existed), and I think if there is a singularity then programming languages afterwards won’t look anything like they do now… although, admittedly, I still wouldn’t be surprised if my favourite of whatever they do look like was designed by Larry Wall.

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He is interested in anything related to RDBMSs and loves crafting complex SQL queries for generating reports. As a journalist, he writes articles, conducts interviews and reviews technical IT books.

All about Catalyst – interview of Matt S. Trout (Part 2 of 3)

Does all that flexibility come at a price?

The key price is that while there are common ways to do things, you’re rarely going to find One True Way to solve any given problem. It’s more likely to be “here are half a dozen perfectly reasonable ways; which one is best probably depends on what the rest of your code looks like”. Plus, while there’s generally not much integration-specific code involved, everything else is a little more DIY than most frameworks seem to require.

I can put together a Catalyst app that does something at least vaguely interesting in a couple of hours, but doing the sort of five-minute wow-moment thing that intro screencasts and marketing copy seem to aim for just doesn’t happen, and often when people first approach Catalyst they tend to get a bit overwhelmed by the various features and the way you can put them together.

There’s a reflex of “this is too much, I don’t need this!”. But then a fair percentage of them come back two or three years later, have another look and go, “ah, I see why I want all these features now: I’d’ve written half as much code in the meantime if I hadn’t decided I didn’t need them”. Similarly, the wow moment is usually three or six months into a project, when you realise that adding features is still going quickly because the code’s naturally shaken out into a sensible structure.

So, there’s quite a bit of learning, and it’s pretty easy for it to look like overkill if you haven’t already experienced the pain involved. It’s a lot like the use strict problem writ large – declaring variables with my in appropriate scopes rather than making it up as you go along means more thinking and more effort to begin with, so it’s not always easy to get across that it’s worth it until the prospective user has had blue daemons fly out of his nose a couple of times from mistakes a more structured approach would’ve avoided.
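To make the analogy concrete, here’s the classic failure mode that use strict guards against – a made-up snippet, not from the interview:

    use strict;
    use warnings;

    my $total = 0;
    $total += 10;
    $tottal += 5;    # typo! without 'use strict' this silently creates a
                     # brand-new variable; with it, the program refuses to compile
    print "$total\n";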

So, it’s flexibility at the expense of a steep learning curve. But apart from that, if I could compare Catalyst to Rails, I would say that Rails tries to be more like a shepherd, guiding the herd the way it thinks they should go, while Catalyst allows room to move and make your own decisions. Is that a valid interpretation?

It seems to me that Rails is very much focused on having opinions, so there’s a single obvious answer for all the common cases, and where you choose not to use a chunk of the stack, whatever replaces it is similarly a different set of opinions; whereas Catalyst definitely focuses on apps that are going to end up large enough to have enough weird corners that you’re going to end up needing to make your own choices. So Rails is significantly better at making easy things as easy as possible, but Catalyst seems to do better at making hard things reasonably natural if you’re willing to sit down and think about it.

I remember talking to a really smart Rails guy over beer at a conference (possibly in Italy) and the two things I remember the most were him saying “my customers’ business logic just isn’t that complicated and Rails makes it easy to get it out of the way so I can focus on the UI”, and when I talked about some of the complexities I was dealing with, his first response was, “wait, you had HOW many tables?”.

So while they share, at least very roughly, the same sort of view of MVC, they’re optimised very differently in terms of affordances for the developers working with them. It wasn’t so long back that somebody I know who’s familiar with Perl and Ruby was talking to me about a new project. I ended up saying: “Build the proof of concept with Rails, and then, if the logic’s complicated enough to make you want to club people to death with a baby seal, point the DBIx::Class schema introspection tool at your database and switch to Catalyst at that point”.
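The introspection tool he’s referring to is DBIx::Class::Schema::Loader; a minimal sketch of pointing it at an existing database (application name and connection details invented for illustration):

    use DBIx::Class::Schema::Loader qw(make_schema_at);

    # Inspect an existing database and write out DBIx::Class Result
    # classes under ./lib as MyApp::Schema::Result::*
    make_schema_at(
        'MyApp::Schema',
        { dump_directory => './lib' },
        [ 'dbi:Pg:dbname=myapp', 'dbuser', 'dbpass' ],
    );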

But surely, like Rails, Catalyst offers functionality out of the box too. What tasks does Catalyst take care of for me and which ones require manual wiring?

There’s a huge ecosystem of plugins, extensions and so forth in both cases but there’s a stylistic difference involved. Let me talk about the database side of things a sec, because I’m less likely to get the Rails-side part completely wrong.

Every Rails tutorial I’ve ever seen begins “first, you write a migration script that creates your table” … and then once you’ve got the table, your class just picks up the right attributes from the columns in the database. That’s how you start, unless you want to do something non-standard (which I’m sure plenty of people do, but it’s an active deviation from the default). Whereas you start a Catalyst app, and your first question is “do I even want to use a database here?”

Then, assuming you do, DBIx::Class is probably the default choice, but if you’ve got a stored-procedure-oriented database to interface to, you probably don’t need it, and for code that’s almost all insanely complex aggregates, objects really aren’t a huge win. But let’s assume you’ve gone for DBIx::Class.

Now you ask yourself “do I want Perl to own the database, or SQL?” In the former case you’ll write a bunch of DBIx::Class code representing your tables, and then tell it to create them in the database; in the latter you’ll create the tables yourself and then generate the DBIx::Class code from the database. There’s not exactly an opinion about which is best: generally, I find that if a single application owns the database then letting the DBIx::Class code generate the schema is ideal, but if you’ve got a database that already exists that a bunch of other apps talk to as well, you’re probably better off having the schema managed some way between the teams for all those apps and generating the DBIx::Class code from a scratch database.
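A minimal sketch of the “Perl owns the database” direction (table and class names invented): you describe a table in a Result class, then have DBIx::Class emit the DDL.

    package MyApp::Schema::Result::User;
    use strict;
    use warnings;
    use base 'DBIx::Class::Core';

    __PACKAGE__->table('users');
    __PACKAGE__->add_columns(
        id   => { data_type => 'integer', is_auto_increment => 1 },
        name => { data_type => 'varchar', size => 255 },
    );
    __PACKAGE__->set_primary_key('id');

    1;

Then, from a one-off deploy script, the schema creates every table it knows about:

    my $schema = MyApp::Schema->connect('dbi:Pg:dbname=myapp', 'dbuser', 'dbpass');
    $schema->deploy;    # issues the CREATE TABLE statements

The “SQL owns it” direction is the make_schema_at introspection shown earlier, run against the hand-managed database.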

Both of those are pretty much first-class approaches, and y’know, if your app owns the database but you’ve already got a way of versioning the schema that works, then I don’t see why I should stop you from doing that and generating the DBIC code anyway. So it’s not so much about whether manual wiring is required for a particular task or not, but about how much freedom you have to pick an approach to a task, and how many decisions that freedom requires before you know how to fire off the relevant code to set things up. I mean, whether you classify code as boilerplate or not depends on whether you ever foresee wanting to change it.

So when you first create a Catalyst controller, you often end up with methods that participate in dispatch – that have routing information attached to them – but are completely empty of code, which tends to look a little bit odd. So you often get questions from newbies of “why do I need to have a method there when it doesn’t do anything?” But then you look at this code again when you’re getting close to feature-complete, and almost all of those methods have code in them now, because that’s how the logic tends to fall out.

There’s two reasons why that’s actively a good thing. First, because there was already a method there, even if it was a no-op to begin with, the fact that it’s a method is a big sign saying “it’s totally OK to put code here if it makes sense”, which is a nice reminder, and makes it quite natural to structure your code in a way that flows nicely. Secondly, once you total it up, any other approach would involve time to declare the non-method routes plus time to redeclare as methods all the routes that ended up with logic – and if most of your methods end up with code in them, that means that overall, for reasonably complex stuff, the Catalyst style ends up being less typing than anything else would be. But again, we’re consciously paying a little bit more in terms of upfront effort as you’re starting, to enable maintainability down the road. A sketch of what that looks like in practice follows.
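Here’s a hypothetical controller in that style (package and model names invented) – the attributes carry the routing information, and the initially empty method is still a legitimate dispatch target:

    package MyApp::Controller::Users;
    use Moose;
    use namespace::autoclean;
    BEGIN { extends 'Catalyst::Controller' }

    # Chain root: matches /users/* and captures the id. Often starts
    # out as a stub; the lookup logic naturally lands here later.
    sub user : Chained('/') PathPart('users') CaptureArgs(1) {
        my ($self, $c, $id) = @_;
        $c->stash(user => $c->model('DB::User')->find($id));
    }

    # Matches /users/*/view – may stay a no-op until the day it needs code
    sub view : Chained('user') PathPart('view') Args(0) { }

    __PACKAGE__->meta->make_immutable;
    1;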

It’s easy to forget that Catalyst is not just a way of building sites, but also a big, big project in software architecture/engineering terms, built with best practices in mind.

Well, it is and it isn’t: there’s quite a lot of code in there that’s actually there to support not best practices, but not forcing people to rewrite code until they’re adding features to it, since if you’ve got six years’ worth of development and a couple hundred tables’ worth of business model, “surprise! we deleted a bunch of features you were using!” isn’t that useful, even when those features only existed because our design sucks (in hindsight, at least).

I’d say, yeah, that things like Chained dispatch and the adaptor model and the support for roles and traits pretty much everywhere enables best practices as we currently understand them. But there’s also a strong commitment to only making backwards incompatible changes when we really have to, because the more of those we make, the less likely people are to upgrade to a version of Catalyst that makes it easy to write new code in a way that sucks less (or, at least, differently).

But there’s a strong sense in the ecosystem, and in the way the community tends to do things, of trying to make it possible to do things as elegantly as possible, even with a definition of elegant that evolves over time. So you might wish that your code from, say, 2008 looked a lot more like the code you’re writing in 2013, but they can coexist just fine until there’s a big feature push on the code from 2008, and then you refactor and modernise as you go. And we’ve always had a bias towards modernisation happening as extensions – and towards prioritising making it more possible to do more things as extensions – rather than adding things into the core.

So, for example, metacpan.org is a Catalyst app using Elasticsearch as a backend, and people are using assorted other non-relational things and getting on just fine … and back in 2006, the usual ORM switched from Class::DBI to DBIx::Class and it wasn’t a big deal (though DBIx::Class got featureful enough that people’s attempts to obsolete it have probably resulted in more psychiatric holds than they have CPAN releases), and a while back we swapped out our own Catalyst::Engine:: system for Plack code implementing the PSGI spec (a system for handling HTTP environment abstraction), and that wasn’t horribly painful and opened up a whole extra ecosystem.

Even in companies conservative enough to be still running 5.8.x Perl, most of the time you still tend to find that they’ve updated the ecosystem to reasonably recent versions, so they’re sharing the same associated toolkits as the new-build code in greenfield projects. So we try to avoid ending up too out of date without breaking existing production code gratuitously: we nudge people towards more modern patterns of use, try not to interfere with people who love the bleeding edge, but don’t force that on the users we have who don’t. So sometimes things take longer to land than people might like. There’s a lot of stuff to understand, but if you’re thinking in terms of core business technology rather than hacking something out that you’ll rewrite entirely in a year when Google buys you or whatever, I think it’s a pretty reasonable set of trade-offs.

In the forthcoming third and last part of the interview, we talk about the other Perl frameworks, Dancer and Mojolicious, the direction in which the project is moving, and whether Perl 6 is a viable option for Web development.

 


All about Catalyst – interview of Matt S. Trout (Part 1 of 3)

We talk to Matt S. Trout, technical team leader at consulting firm Shadowcat Systems Limited, creator of the DBIx::Class ORM and of many other CPAN modules, and of course co-maintainer of the Catalyst web framework. These are some of his activities, but for this interview we are interested in Matt’s work with Catalyst.

Our discussion turned out not to be just about Catalyst though. While discussing the virtues of the framework, we learned, in Matt’s own colourful language, what makes other popular web frameworks tick, and managed to bring out the consultant in him, who shared invaluable thoughts on architecting software as well as on the possibility of Perl 6 someday replacing Perl 5 for web development.

We concluded that there’s no framework that wins by knockout; the game will be decided on points, points awarded by the final judge: your needs.

So, Matt, let’s start with the basics. Catalyst is an MVC framework. What is the MVC pattern and how does Catalyst implement it?

The fun part about MVC is that if you go through a dozen pages about it on Google you’ll end up with at least ten different definitions. The two that are probably most worth paying attention to are the original one and the Rails definition.

The original concept of MVC came out of the Xerox PARC work and was invented for Smalltalk GUIs. It posits a model which is basically data that you’re live-manipulating, a view which is responsible for rendering that, and a controller which accepts user actions.

The key thing about it was that the view knew about the model, but nothing else. The controller knew about the model and the view, while the model was treated like a mushroom – kept in the dark; the view/controller classes handled changes to the model by using the observer pattern, so an event got fired when it changed (you’ll find that angular.js, for example, works on pretty much this basis – it’s very much a direct-UI-side pattern).

Now, what Rails calls MVC (and, pretty much, what Catalyst does too) is a sort of attempt to squash that into the server side. At that point your view is basically the sum of the templating system you’re using plus the browser’s rendering engine, and your controller is the sum of the browser’s dispatch of links and forms and the code that handles that server side. So, server side, you end up with the controller being the receiver for the HTTP request, which picks some model data and puts it in … usually some sort of unstructured bag. In Catalyst we have a hash attached to the request context called the stash; in Rails they use the controller’s instance attributes. Then you hand that unstructured bag of model objects off to a template, which renders it – this is your server-side view.

So, the request cycle for a traditional HTML-rendering Catalyst app (sketched in code after the list) is:

  1. the request comes in
  2. the appropriate controller is selected
  3. Catalyst calls the controller code, which performs any required alterations to the model
  4. the controller then tells the view to render a named template with a set of data
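A minimal sketch of steps 2–4 as they look inside a controller (action, model and template names invented):

    sub report : Path('/report') Args(0) {
        my ($self, $c) = @_;

        # step 3: pull model data and drop it in the stash (the unstructured bag)
        $c->stash->{orders} = [ $c->model('DB::Order')->all ];

        # step 4: hand the stash to a view to render the named template
        $c->stash->{template} = 'report.tt';
        $c->forward($c->view('HTML'));
    }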

The fun part, of course, is that for things like REST APIs you tend to think in terms of serialize/deserialize rather than event -> GUI change, so at that point the controller basically becomes “the request handler” and the view part becomes pretty much vestigial, because the work to translate that data into something to display to the user is done elsewhere, usually in client-side JavaScript (well, assuming the client is a user-facing app at all, anyway).

So, in practice, a lot of stuff isn’t exactly MVC … but there’ve been so many variants and reinterpretations of the pattern over the years that above all it means “keep the interaction flow, the business logic, and the display separate … somehow”, which is clearly a good thing, and idiomatic Catalyst code tends to do so. The usual rule of thumb is “if this logic could make sense in a different UI (e.g. a CLI script or a cron job), then it probably belongs inside the domain code that your web app regards as its model”; plus “keep the templates simple, and keep their interaction with the model read-only – anything clever or mutating probably belongs in the controller”.

So you basically strive to push anything non-cosmetic out of the view, and then anything non-current-UI-specific out of the controller, and the end result is at least approximately MVC for some of the definitions, and ends up being decently maintainable.

Can you swap template engines for the view as well as, at the backend, swap DBMSs for the model?

Access to the models and views is built atop a fairly simple IOC system – inversion of control – so basically Catalyst loads and makes available whatever models and views are provided, and then the controller will ask Catalyst for the objects it needs. So the key thing is that a single view is responsible for a view onto the application; the templating engine is an implementation detail, in effect, and there are a bunch of view base classes that mean you don’t have to worry about that, but if you had an app with a main UI and an admin UI, you might decide to keep both those UIs within the same Catalyst application and have two views that use the same templating system but a completely different set of templates/HTML style/etc.
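As a sketch of that last scenario (class names and template paths invented; normally each package lives in its own file), two Template Toolkit views differing only in where they look for templates:

    package MyApp::View::Main;
    use Moose;
    extends 'Catalyst::View::TT';
    __PACKAGE__->config(INCLUDE_PATH => ['root/templates/main']);

    package MyApp::View::Admin;
    use Moose;
    extends 'Catalyst::View::TT';
    __PACKAGE__->config(INCLUDE_PATH => ['root/templates/admin']);

A controller then picks whichever is appropriate via $c->view('Main') or $c->view('Admin').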

In terms of models, if you need support from your web framework to swap database backends, you’re doing something horribly wrong. The idea is that your domain model code is just something that exposes methods that the rest of the code uses – normally it doesn’t even live in the Catalyst Model/ directory. In there are adapter classes that basically bolt external code into your application.

Because the domain code shouldn’t be web-specific in the first place, you have some slightly more specialised adapters – notably Catalyst::Model::DBIC::Schema, which makes it easier to do a bunch of clever things involving DBIx::Class – but the DBIx::Class code, which is what talks to your database for you, is outside the scope of the Catalyst app itself.

The web application should be designed as an interface to the domain model which not only makes things a lot cleaner, but means that you can test your domain model code without needing to involve Catalyst at all. Running a full HTTP request cycle just to see if a web-independent calculation is implemented correctly is a waste of time, money and perfectly good electricity!

So, Catalyst isn’t so much DBMS-independent as domain-implementation-agnostic. There are Catalyst apps that don’t even have a database, that manage, for example, LDAP trees, or serve files from disk (e.g. the app for our advent calendar). The model/view instantiation stuff is useful, but the crucial advantage is cultural. It’s not so much about explicitly building for pluggability as about refusing to impose requirements on the domain code, at which point you don’t actually need to implement anything specific to be able to plug in pretty much whatever code is most appropriate. Sometimes opinion is really useful. Opinion about somebody else’s business logic, on the other hand, should in my experience be left to the domain experts rather than the web architect.

Catalyst also has a RESTful interface. How is the URI mapped to an action?

The URI mapping works the same way it always does. Basically, you have methods that are each responsible for a chunk of the URL, so for a URL like /domain/example.com/user/bob you’d have basically a method per path element: the first one sets up any domain-generic stuff and the base collection, the second pulls the domain object out of the collection, then you go from there to a collection of users for that domain and pull the specific user. Catalyst’s chained dispatch system is basically entirely oriented around the URL space – drilling down through representations/entities that way is a key thing to do to achieve REST, but basically a good idea in terms of URL design anyway. Plus, because the core stuff is all about path matching, it becomes pretty natural to handle HTTP methods last. So there’s Catalyst::Action::REST and the core method-matching stuff that makes it cleaner to do that, but basically they both just save you writing:

if (<GET request>) { ... } elsif (<POST request>) { ... } etc.
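For the /domain/example.com/user/bob example, a chained controller might look roughly like this (a hypothetical sketch; package, model and relationship names invented):

    package MyApp::Controller::Domain;
    use Moose;
    BEGIN { extends 'Catalyst::Controller' }

    # Matches /domain/* and captures the domain name
    sub base : Chained('/') PathPart('domain') CaptureArgs(1) {
        my ($self, $c, $domain_name) = @_;
        $c->stash->{domain} =
            $c->model('DB::Domain')->find({ name => $domain_name });
    }

    # Matches /domain/*/user/* – drills down from the stashed domain
    sub user : Chained('base') PathPart('user') Args(1) {
        my ($self, $c, $username) = @_;
        $c->stash->{user} =
            $c->stash->{domain}->users->find({ name => $username });
    }

    1;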

Of course you can do RESTful straight HTTP+HTML UIs, although personally I’ve found that style a little contrived in places. For APIs, though, the approach really shines. Basically, API code is going to be using a serializer/deserializer pair (usually JSON these days) instead of form parsing and a view, but apart from that, the writing of the logic isn’t hugely different. RESTful isn’t really about a specific interface, it’s about how you use the capabilities available. But the URI mapping and request dispatch cycle provides a very rich set of capabilities, and allows a bunch of places to fiddle with dispatch during the matching process. Catalyst::Action::REST basically hijacks the part where Catalyst would normally call a method and calls a method based on the HTTP method instead; so, say, instead of user you’d have user_GET called. There’s also Catalyst::Controller::DBIC::API, which can provide a fairly complete REST-style JSON API onto your DBIx::Class object graph.
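A minimal sketch of that hijack using Catalyst::Action::REST (names invented, error handling omitted):

    package MyApp::Controller::API::User;
    use Moose;
    BEGIN { extends 'Catalyst::Controller::REST' }

    # Dispatch matches the path as usual; the REST action class then
    # re-dispatches on HTTP method to user_GET, user_POST, and so on
    sub user : Path('/api/user') Args(1) ActionClass('REST') { }

    sub user_GET {
        my ($self, $c, $id) = @_;
        my $user = $c->model('DB::User')->find($id);
        $self->status_ok($c, entity => { id => $id, name => $user->name });
    }

    1;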

So again it’s not so much that we have specific support for something, but that the features provide mechanism, and then the policy/patterns you implement using those are enabled rather than dictated by the tools. I think the point I’m trying to make is that REST is about methods as verbs and about entities as first-class things, so it implies good URL design … but you can do good URL design and not do the rest of REST; it’s just that they caught on at about the same time.

What about plugging CPAN modules in? I understand that this is another showcase of Catalyst’s extensibility. Can any module be used, or must they adhere to a public interface of some sort?

There’s very little interface; for most classes, either your own or from CPAN, Catalyst::Model::Adaptor can do the trick. There are three versions of that, which are:

  • call new once, during startup, and hang onto the object
  • call new no more than once per request, keeping the same object for the rest of the request once it’s been asked for
  • call new every time somebody asks for the object

The first one is probably the most common, but it’s often nice to use the second approach so that your model can have a first-class understanding of, for example, which user is currently logged in, if any – the adaptor manages the lifecycle for you. Anything with a new method is going to work, which means any and all object-oriented stuff written according to convention in at least the past 10 years or so.
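As a sketch of the first flavour, wrapping a hypothetical MyApp::EmailSender class (the class and args config keys are Catalyst::Model::Adaptor’s):

    package MyApp::Model::EmailSender;
    use Moose;
    extends 'Catalyst::Model::Adaptor';

    # new() is called once at startup; every controller call to
    # $c->model('EmailSender') then returns that same object
    __PACKAGE__->config(
        class => 'MyApp::EmailSender',
        args  => { smtp_host => 'localhost' },   # handed to new()
    );

    1;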

For anything else you break out Moose/Moo, and write yourself a quick normal class that wraps whatever crazy thing you’re using and now you’re back to it being easy (and you’ll probably find that class is more pleasant to use in all your code, anyway). Really, any attempt at automating the remaining cases would probably be more code to configure for whatever use-case you have than to just write the code to do it.

For example, sometimes you want a component that’s instantiated once, but then specialises itself as requested. A useful example would be “I want to keep the database connection persistent, but still have a concept of a current user to use to enforce restrictions on queries.” So there’s a role called Catalyst::Component::InstancePerContext that provides that – instead of using the Adaptor’s per-request version, you use a normal adaptor, and use that role in the class it constructs; then that object will get a method called on it once per request, which can return the final model object to be used by the controller code. I’ve probably expended more characters describing it than any given implementation takes, because it’s really just the implementation of a couple of short methods. And besides that, the most common case for that is DBIx::Class. There’s also a PerRequestSchema role shipped with Catalyst::Model::DBIC::Schema (which is basically a DBIx::Class-specialised adaptor, remember) that reduces it to something like:

        sub _build_per_request_schema {
            my ($self, $c) = @_;
            $self->schema->restrict_with_object($c->user);
        }

… but again the goal isn’t so much to have lots of full-featured integration code, but to minimise the need to write integration code in the first place.

In the forthcoming second part of the interview, we talk about the flexibility of Catalyst, its learning curve, Ruby on Rails, and the framework in Software Engineering terms.

 



Git-ing Out Of Trouble

Git is a popular and powerful tool for managing source code, documentation, and really anything else made of text that you’d like to keep track of.  With great power comes quite a lot of complexity, however, and it can be easy to get into a tangle using this tool.  With that in mind, I thought I’d share some tips for how to “undo” with git.

The closest thing to an undo command in git is git reset.  You can undo various levels of things, right up to throwing away changes that are already in your history.  Let’s take a look at some examples, in order of severity.

 

Git Reset

Git reset without any additional arguments (technically it defaults to git reset --mixed) will simply unstage your changes.  The changes will still be there, the files won’t change, but the changes you had already staged for your next commit will no longer be staged.  Instead, you’ll see locally modified files.  This is useful if you realise that you need to commit only part of the local modifications; git reset lets you unstage everything without losing changes, and then stage the ones you want.
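For example (a quick sketch at the command line; the path is invented):

    git add .          # stage everything
    git reset          # change your mind: unstage it all (--mixed is implied)
    git status         # files show as modified again, nothing staged
    git add lib/       # now stage only the part you meant to commit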

 

Git Reset --Hard

Using the --hard switch is more destructive.  This will discard all changes since the last commit, regardless of whether they were staged or not.  It’s relatively difficult to lose work in git, but this is an excellent way of achieving just that!  It’s very useful though when you realise you’ve gone off on a tangent or find yourself in a dead end, as git reset --hard will just put you back to where you were when you last committed.  Personally I find it helpful to commit before going for lunch, as immediately afterwards seems to be my peak time for tangents and I can then easily rescue myself.
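In other words:

    git reset --hard   # discard every uncommitted change, staged or not,
                       # and return to the state of the last commit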

 

Git Reset --Hard [Revision]

Using --hard with a specific SHA1 will throw away everything in your working copy, including the staging area, and the commits since the one you name here.  This is great if you’re regretting something, or have committed to the wrong branch (make a new branch from here, then use this technique on the existing branch to remove your accidental commits – I do this one a lot too).  Use with caution though: if you have already pushed your branch to somewhere else, your next push will need to use the -f switch to force the push – and if anyone has pulled your changes, they are very likely to have problems, so this isn’t a recommended approach for already-pushed changes.
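The wrong-branch rescue described above might look like this (branch name invented, and assuming the last two commits are the accidental ones):

    git branch feature-work       # keep the accidental commits on a new branch
    git reset --hard HEAD~2       # rewind the current branch two commits
    git checkout feature-work     # carry on where you left off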

Hopefully there are some tips there that will help you to get out of trouble in the unlikely event that you need them.  When things go wrong, stay calm and remember that it happens to the best of us!

Lorna Jane Mitchell is a web development consultant and trainer from Leeds in the UK, specialising in open source technologies, data-related problems, and APIs. She is also an open source project lead, regular conference speaker, prolific blogger, and author of PHP Web Services, published by O’Reilly.

 

Do you want to know more about Git?

Lorna will be giving a full day tutorial – ‘Git for Development Teams’ on Thursday 6th February 2014 in London. This tutorial is organized by FLOSS UK and O’Reilly UK Ltd. Click here for further details. Please note that the early bird rates are available until January 14th.

No YAPC!

I am suffering from a lack of Perl – maybe not the language, but definitely the community. As far as I can remember, I have always been there – behind the O’Reilly table, filled with all the pale blue books. You guessed it: this year I am not going to YAPC, as it is outside of my territory – that feels very strange and my summer is incomplete. Come on, Perl people, get YAPC back in my part of the world! Since I cannot talk about YAPC, I will talk about two new (or not-so-new) courses that we have organized with FLOSS UK. These courses are not just about Perl, but the tutor is the Perl guru – Damian Conway.

I will not bore you with the details of the courses; you can read them on the website:

Regular Expressions Training Course

Thursday 11th October 2012

Venue: Imperial Hotel, Russell Square, London WC1B 5BB

Presentation Skills Training Course

Friday 12th October 2012

Venue: Imperial Hotel, Russell Square, London WC1B 5BB

 

I thought I would just give you some examples of people’s feedback after they attended Damian’s courses or talks. The first quote is about the “Presentation Skills” course given in London in April (I know you’ve read this quote in a previous post, but if the BBC can do repeats, why can’t I?):

“I attended Damian’s ‘Presentation Aikido’ yesterday. It’s the only time I’ve remained so engaged with a subject for 7-8 hours, and that includes school, university, OSCON and an all night session with Lord of the Rings. What’s even more impressive is that I remained engaged with a subject I normally find incredibly tedious. Like almost everyone else, my professional life has been blighted by three dimensional transitions, psychedelic colour schemes and psychotic presenters, not all of my own creation. Damian’s skill is to focus on the content and to cut the rubbish that can make presentations so unbearable. And that content is unparalleled. Using his hard-won experience as a speaker, he imbibes attendees with a very real sense of what it takes to make great presentations, and that’s the best possible outcome I could have hoped for.” – Graham Morrison, Linux Format

Then I randomly took some quotes from Damian’s website – please note that for once I am not guilty of the typos etc.

“Damian is a witty, engaging and experienced speaker and it shows in his delivery style and the quality of his talk and slides. He makes the whole process of presenting look effortless. Of course, if you had attended his class on presentations you would know that he had already practiced his talk at least three times in the week before. That was an important lesson and it does help as it makes you confident and knowledgable about the flow of your talk. There were several other tricks that Damian gave in that presentation and if you get a chance to attend, I recommend you take it.” 

“Dr. Damian Conway is an extremely clever, creative, witty, and entertaining lecturer. He has been aptly characterized as a cross between Donald Knuth and Monty Python.”

“Damian is the most engaging and motivated instructor I’ve had the pleasure of seeing. I can’t wait to see what he has up his sleeve in the next twenty years!”

“Very knowledgeable (for obvious reasons) as well as congenial and animated. Made the subject quite fun and very enticing.”

“Informative and interesting; not boring. Lively speaker; no way to get any sleep in here.”

“Great pace/instruction. Damian is an excellent teacher; very creative and knowledgeable. The humor really made for a great presentation.”

“Damian is a top-notch instructor. I was extremely pleased. It’s good to be taught by leaders in the field!”

“Too much fun; should be declared illegal!”

I hope I have convinced you that these two courses are a must; if not, have a look at Damian’s website, where you will find more fantastic quotes.