
All about Mojolicious – interview of Sebastian Riedel part 2

2014 December 3
by Nikos Vaggalis


NV: Does the dependency free nature of Mojolicious act as an invitation to people familiar with other frameworks (i.e. Ruby on Rails) and languages (i.e. PHP)? That aside, what other factors/features would lure such developers to the framework?

SR: The dependency-free nature of Mojolicious is actually more of a myth; the truth is that installing Mojolicious is simply a very fast and pleasant experience.

One of the ways we've achieved this is by making hard-to-install third-party modules like IO::Socket::SSL and Net::DNS::Native optional.

I think what makes Mojolicious special is best explained with an example application:

    use Mojolicious::Lite;
    use 5.20.0;
    use experimental 'signatures';

    # Render template "index.html.ep" from the DATA section
    get '/' => {template => 'index'};

    # WebSocket service used by the template to extract the title from a web site
    websocket '/title' => sub ($c) {
      $c->on(message => sub ($c, $msg) {
        my $title = $c->ua->get($msg)->res->dom->at('title')->text;
        $c->send($title);
      });
    };

    app->start;
    __DATA__

    @@ index.html.ep
    % my $url = url_for 'title';
    <script>
      var ws = new WebSocket('<%= $url->to_abs %>');
      ws.onmessage = function (event) { document.body.innerHTML += event.data };
      ws.onopen    = function (event) { ws.send('http://mojolicio.us') };
    </script>
   

This is actually the first example application you encounter on our website (http://mojolicio.us).
It doesn’t look very complicated at all. But once you start digging a little deeper, you’ll quickly realize how crazy (in a good way) it really is, and how hard it would be to replicate with any other web framework, in any language.

To give you a very quick summary:

  1. There's an EP (Embedded Perl) template in the DATA section of a single-file web application. That template generates an HTML file containing JavaScript, which opens a WebSocket connection to a dynamically generated URL (ws://127.0.0.1:3000/title), based on the name of a route.
  2. The script then sends another URL (http://mojolicio.us) through the WebSocket as a text message, which results in a message event being emitted on the server.
  3. Our server then uses a full-featured HTTP user agent to issue a GET request to this URL, and an HTML DOM parser to extract the title from the resulting document with CSS selectors.
  4. Finally, the title is returned through the WebSocket to the browser, which displays it in the body of our original HTML file.

Next year at Mojoconf 2015, I’ll be giving a whole talk about this one application, exploring it in much greater detail.

NV: It’s a framework that you use in pure Perl. Why not go for a DSL like Dancer does?

SR: There are actually two kinds of web framework DSLs, and they differ by scope.

First, you have your routing DSL, which usually runs during server start-up and modifies application state. (application scope)

    get '/foo' => sub {...};

Second, there is what I would call the content generation DSL, which modifies request/response state. (request scope)

    get '/foo' => sub {
      header 'Content-Type' => 'text/plain';
      render 'some_template';
    };

Mojolicious does have the first kind, and we've already used it in the examples above, but not the second. The reason for this is that the second kind does not work very well when you plan on handling multiple requests concurrently in the same process, which involves a lot of context switching. It's a trade-off between making your framework more approachable for beginners, who might not know Object-Oriented Perl yet, and supporting modern real-time web features.
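The context-switching problem can be sketched in a few lines of plain Perl (a hypothetical illustration, not Mojolicious internals): a content-generation DSL typically stores the "current" request in a hidden package-global, which two interleaved requests in one process will clobber, while passing an explicit controller object keeps each request's state separate.

```perl
use strict;
use warnings;

# DSL style: keywords operate on a hidden package-global "current request".
our $current;
sub header { $current->{headers}{ $_[0] } = $_[1] }

# Object style (closer to the Mojolicious approach): state travels with an
# explicit controller-like object, so interleaving cannot mix requests up.
sub header_on { my ($c, $k, $v) = @_; $c->{headers}{$k} = $v }

# Simulate two concurrent requests whose handlers interleave mid-flight.
my ($req_a, $req_b) = ({ headers => {} }, { headers => {} });

$current = $req_a;                        # request A starts...
$current = $req_b;                        # ...but B pre-empts before A responds
header('Content-Type' => 'text/plain');   # meant for A, lands on B!

print exists $req_b->{headers}{'Content-Type'} ? "clobbered B\n" : "ok\n";

# With an explicit object there is no ambient state to get confused:
header_on($req_a, 'Content-Type' => 'text/html');
print $req_a->{headers}{'Content-Type'}, "\n";
```

This is why Mojolicious threads a controller object (`$c`) through every handler instead of offering request-scoped keywords.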

NV: Which object system is Mojolicious using and which can I use in my code?

SR: Mojolicious uses plain old hash-based Perl objects, and we take special care to allow Moose and Moo to be used in applications as well.
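For readers unfamiliar with the style, a hash-based Perl object is just a blessed hash reference. The sketch below uses only core Perl; the class and method names are illustrative, not Mojolicious code, but the accessor is the kind that Mojolicious's own Mojo::Base generates with its `has` keyword.

```perl
use strict;
use warnings;

package Greeting;

# Constructor: bless a plain hash reference into the class.
sub new {
    my ($class, %attrs) = @_;
    return bless {%attrs}, $class;
}

# A simple read/write accessor over one hash slot.
sub name {
    my $self = shift;
    $self->{name} = shift if @_;
    return $self->{name};
}

sub greet { my $self = shift; return 'Hello, ' . $self->name . '!' }

package main;

my $g = Greeting->new(name => 'Sebastian');
print $g->greet, "\n";   # Hello, Sebastian!
```

Because the object is just a hash underneath, classes built this way interoperate naturally with Moose or Moo classes in the same application.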

NV: With Dancer you can easily integrate jQuery and Bootstrap with the templating system. How does Mojolicious approach this integration?

SR: Mojolicious is completely JavaScript/HTML/CSS framework agnostic, and will work with all of them. Some frameworks, including jQuery and Bootstrap, do have plugins on CPAN, but we don't discriminate.

NV: Mojolicious vs Mojolicious::Lite. When to use each?

SR: I usually start exploring ideas with a single-file Mojolicious::Lite prototype, like we’ve seen above, and slowly grow it into a well-structured Mojolicious web application, which looks more like your average CPAN distribution.

This is a rather simple process, because Mojolicious::Lite is only a tiny wrapper around Mojolicious, and both share like 99% of the same code.

NV: What can we expect in the future and what is the greater vision for the project’s evolution?

SR: Mojolicious has an amazing community, and I hope we can expand on that to reach more people from outside the Perl community in the future. Not a day goes by where I don't receive requests for a Mojolicious book, so that's a pretty big priority too.

Feature-wise, with the release of the final RFC, I expect HTTP/2 to be a very big topic in 2015.
And hopefully we will get to play more with new Perl features such as signatures; I can't wait for a polyfill CPAN module to bring signatures to older versions of Perl.

NV: Finally, prompted by the news that Perl 6 will officially launch for production use by 2015, I'd like to hear your thoughts on Perl 6 and whether it could or would be used with, or as part of, Mojolicious.

SR: I’ve had the pleasure to see Jonathan Worthington talk about concurrency and parallelism in Perl6 at Mojoconf 2014, and to say that it was quite inspiring would be an understatement.

But “production use” can mean a lot of different things to a lot of people. Is it feature complete? Is it as fast as Perl5? Would you bet the future of your company on it?

I love Perl6 the language, it solves all the problems I have with Perl5, and if there’s an implementation that’s good enough, you bet there will be a Mojolicious port!


Nikos Vaggalis has a BSc in Computer Science and a MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. As a journalist he writes articles, conducts interviews and reviews technical IT books

All about Mojolicious – interview of Sebastian Riedel part 1

2014 November 26
by Nikos Vaggalis


Our journey into the world of Perl’s Web frameworks would not be complete without Mojolicious, therefore we talked to Sebastian Riedel, Mojolicious’ mastermind and Catalyst’s original founder.

We looked at Mojolicious' history (why Sebastian left Catalyst for Mojolicious), its present (what the framework actually does), and the project's future (Sebastian's long-term plans, and whether Perl 6 will have an effect on the project). We also got more technical with questions like why not opt for a DSL as Dancer does, what is meant by 'real-time web framework', whether the framework is dependency-free, and much more.

NV: Do you think that now with Mojolicious, Catalyst and Dancer we are experiencing Perl’s newest and most successful Web revolution since the 90’s?

SR: It’s certainly a great time for web development with Perl, and it has been a lot of fun seeing Catalyst and Mojolicious take the Perl community by storm.

But for a real revolution, along the lines of CGI.pm in the late 90s, I think we have to get a lot better at reaching people outside the echo chamber, which is not really a technical, but a public relations problem.

NV: Why did you leave Catalyst for Mojolicious?

SR: Creative differences. At the time I was still experimenting a lot with new ideas for Catalyst, many of which are now part of Mojolicious, but what the majority of core team members really wanted was stability.

So rather than risk harming Catalyst with a drawn-out fight, I decided to leave for a fresh start.
It was very disappointing at the time, but the right decision in retrospect.

NV: Is Mojolicious a MVC framework and if so how does it implement the MVC pattern?

SR: Yes, it is very similar to Catalyst in that regard. But we don't highlight the fact very much; Model-View-Controller is just one of many design patterns we use.

Out of the box, Mojolicious is Model layer agnostic, and we consider web applications simply frontends to existing business logic. Controllers are plain old Perl classes, connecting this existing business logic to a View, which would usually be EP (Embedded Perl), or one of the many other template systems we support.

NV: There are many web frameworks out there each one targeting various areas of web development. Which ones does Mojolicious address and how does it compare to Dancer and Catalyst?

SR: There was a time when I would jump at every opportunity to learn more about all the different web frameworks out there, searching for inspiration.

But these days there's actually very little innovation happening; almost all server-side frameworks follow the same basic formula. Some are trying to optimize for projects of a certain size, but it's mostly just programming languages competing now.

So I only really pay attention to the very small number that are still experimenting with different architectures and technologies, usually written in languages other than Perl. Some recent examples would be Meteor (JavaScript) and Play Framework (Scala).

What’s really important to me with Mojolicious, and what I believe really differentiates it from everything else in Perl, is that we are always trying new things. Like we’ve done with tight integration of WebSockets and event loops, or the ability to grow from single-file prototypes to well structured web applications.

NV: What is the term ‘real time web framework’ referring to? To the capability of doing WebSockets, non-blocking I/O and event loops? Can you walk us through these concepts? Do such features make Mojolicious a prime platform for building Web APIs?

SR: The real-time web is simply a collection of technologies that allow content to be pushed to consumers with long-lived connections, as soon as it is generated. One of these technologies is the WebSocket protocol, offering full bi-directional low-latency communication channels between the browser and your web server.

I'm not actually a big fan of non-blocking I/O and event loops, but they are the tools that allow us to scale, so that a single WebSocket server process can handle thousands of connections concurrently.

Sounds rather complicated, doesn't it? But with Mojolicious all you need are these 9 lines of Perl code, and you have a fully functioning real-time web application:

    use Mojolicious::Lite;
    use 5.20.0;
    use experimental 'signatures';

    websocket '/echo' => sub ($c) {
      $c->on(message => sub ($c, $msg) {
        $c->send("echo: $msg");
      });
    };

    app->start;

You tell me if this makes Mojolicious a “prime platform for building Web APIs”. :)

NV: How easy is it to extend the framework with plugins, and what are some of the most useful ones?

SR: Mojolicious has 415 reverse dependencies on MetaCPAN, so I'd say it is pretty easy to extend. While there are many, many good ones, I have a weak spot for Mojolicious::Plugin::AssetPack, which takes care of all those annoying little asset management tasks, like minifying JavaScript and CSS files.

Mojolicious::Plugin::I18N and Mojolicious::Plugin::Mail are also very commonly used, and I guess I should mention my own Mojolicious::Plugin::Minion, which is a job queue designed specifically for Mojolicious.

In the next and final part, Sebastian shares his views on DSLs, the project's dependencies, and the upcoming release of Perl 6 for production use, among other topics.



Saving money with Open Source software

2014 November 18

“In the end, Postgres looks to me like it’s saving us like 5X in hardware costs as we continue to grow.”

This comment was published on Reddit about an article that compares PostgreSQL with MS SQL Server. I will not join the raging battle between the two camps; I am leaving the field to those who know best.

This article made me rethink the reasons why I like Open Source.

Why do I like Open Source?

This might be a very romantic view, but I still believe people are good at working together for the benefit of our society. So what is Open Source? Open Source software is software whose code can be modified or enhanced by anyone.

Born from a grassroots movement, Open Source brings:

  • Collaboration between people who may never meet but have the same vision
  • Delivery of lower cost products as there are no big companies or shareholders behind the projects
  • Strong motivation from individuals who have a huge interest in writing code, making other members of the community enthusiastic about their projects
  • Flexibility as individuals make improvements which are then made available to the public

For all these reasons and many more, PostgreSQL is a good example of Open Source software and its movement. For example, 2ndQuadrant employees are significant contributors to the development of PostgreSQL, with many of the features found in the current version developed by their people. Their latest addition is BDR (Bi-Directional Replication), an extension to PostgreSQL, free and Open Source, which will be integrated into future versions of PostgreSQL.

Will Open Source save us money? That, I suppose, will be the question for many years to come.

Bi-Directional Replication or Asynchronous multi-master replication for PostgreSQL

2014 July 11

As mentioned in my last post, I went to CHAR(14). There I learnt about Bi-Directional Replication (BDR). BDR is an asynchronous multi-master replication system for PostgreSQL, specifically designed to allow geographically distributed clusters. Supporting up to 48 nodes (and possibly more in future releases), BDR is a low-overhead, low-maintenance technology for distributed databases.


BDR was created by 2ndQuadrant and is the first of several replication technologies the company will announce this year to dramatically enhance PostgreSQL. Features of BDR have been, and will continue to be, moved into future releases of PostgreSQL. It is well known that 2ndQuadrant develops code that all users of PostgreSQL can benefit from. The company has a long history of advancing the development of PostgreSQL, and is specifically credited with making improvements to replication techniques.

2ndQuadrant’s CTO, Simon Riggs, commented: “BDR is a major enhancement to replication design and, with up to 48 master nodes supported, it offers a significant opportunity to reduce the overhead and headaches experienced with previous approaches to replication. For any organisation with distributed PostgreSQL databases replicated across multiple master nodes, BDR should be seriously considered.”

BDR is available as open source software, direct from 2ndQuadrant, with consultancy and support contracts available to ensure users can successfully design and implement a stable replicated environment. 2ndQuadrant’s Production Support service provides direct help from the development team behind BDR.

The company has been working with a number of early adopter clients, including BullionByPost®, and a leading antivirus software developer, to fine tune and evaluate BDR in demanding environments, ahead of this announcement.

I forgot to say that BDR, an extension to PostgreSQL, is free and open source, licensed under the same terms as PostgreSQL. PostgreSQL is released under the PostgreSQL License, a liberal Open Source license similar to the BSD or MIT licenses.

You can get a lot more information on the website –

There you can also describe your replication requirements or sign up for the quarterly Newsletter.

 

From books to software

2014 June 18
by Josette Garcia

I have some exciting news to share – I have joined 2ndQuadrant as Community Manager. It is a rather different role for me, even though I have been working with the Open Source community for years – Perl, Python, PHP and Linux have no secrets – so why am I so nervous? Could the reason be that a book is a very physical thing, and even though I did not understand the content, I could always grab one and read the preface or about the animal on the cover? Software for a non-techie is rather elusive, full of mysteries – you can't see anything, you can't touch anything – but I am sure I will get used to it.

Why 2ndQuadrant?

  • You guessed it: 2ndQuadrant work with the Open Source community and provide lots of add-ins to PostgreSQL which are free for everyone to work on.
  • I like the philosophy of 2ndQuadrant as a big part of the revenue is ploughed back into the community.
  • It is a worldwide company with offices in US, UK, France, Italy, Germany, Nordic region and South America.
  • Collectively, 2ndQuadrant provide over 100 man-years of PostgreSQL expertise in all areas of the Database Lifecycle Management process.

First days out

I will be attending the following conferences:

  • CHAR(14), Milton Keynes, 8th July – for anybody interested in Clustering, High Availability and Replication, and much more. 2ndQuadrant and Translattice will be presenting the important new technology of Bi-Directional Replication (BDR). BDR will be explained in more detail, offering you the first opportunity to understand how it works and what implementation options are available – directly from the developers.
  • PGDay, Milton Keynes, 9th July – with talks on PostgreSQL features in 9.4, Migrating rant & rave to PostgreSQL, Business Intelligence with Window Functions, New BI features from the AXLE project and much more, PGDay focuses on bringing PostgreSQL users to a whole new level of understanding. It will cover the core topics you need to be successful with PostgreSQL and will give you the opportunity to network with fellow users.

“My community”

If the members of your User Group, colleagues from your Company or Institution are interested in PostgreSQL, please let me know as I would like to build a bigger, stronger community around this database.  Also please let me know of any conferences, events we should attend or anything else I should look into.  You will find my email on the “About Me” page.

All about Dancer – interview of Sawyer X part 3 and last

2014 May 14
by Nikos Vaggalis

NV: Dancer and web services – where do I start? Conceptually, is REST only about web services?

SX: While REST (REpresentational State Transfer) is not limited to web services, it’s most widely used in that context. It’s a declared structure with which you can define and provide a consistent and understandable web service.

As Dancer tries to be opinionated in moderation, it provides plugins (Dancer::Plugin::REST and Dancer2::Plugin::REST) to help you go about defining RESTful routes. They help you easily define resources in your application, so you get serialization and routes set up with little effort.

Sometimes it’s hard for me to get used to new tools, so I haven’t used that plugin yet. I generally define my own paths for everything. While I suggest you take a look at it, I should probably do the same.

NV: What’s in the project’s wish-list, where is it heading at, and what can we expect in the future?

SX: We’re focusing on several issues at the moment, which all seem to be congruent with each other: transition users to Dancer 2, overhaul the documentation, improve our test coverage, further decouple internal entities, streamline our coordination, and strip code further.

We've made a lot of progress recently, much of it thanks to new core members, and to more companies (such as Booking.com) sponsoring hackathons, allowing us to focus more time on these goals. The attention we receive from our community is invigorating, and pushes us to continue work on the project, and invest time in it. It gives us an insight into how worthwhile it really is, and it makes our work a pleasure.

NV: Perl vs PHP vs Ruby vs language X, for the web. Why has Perl fallen out of favour with web devs, and what can be done about it?

SX: While I have been working with Perl for a long while, and started back when CGI was pretty much it, others have much more experience, and might be able to answer this question better than I can. This is my rough reasoning, and I may be completely off on this.

I believe the downfall of Perl as the dominating web language was due to our apathy at the time. As soon as we ruled the web with CGI, we were lulled into a false sense of security in that position. In the meantime, other languages were trying to get their bearings and come up with something that could compete with it. It took some time, but they came up with better things, while we pretty much stalled.

WSGI was done in Python. Then Ruby’s Rack came around. It took some time until we realized those were good and we should have that too, finally provided by Miyagawa as PSGI/Plack. Now our problem is that a lot of people are still not moving onwards to it, and are stuck with arcane methods of web programming, not understanding the value of moving forward.

It's important to remember that no single language can truly "rule" the web anyway. Perl had its glory days, and they are over. Then PHP had its turn, and it was over as soon as people realized PHP is not even a real language, and the same happened with Ruby and with Rails. Others will have their turn for 15 minutes of fame, and that will be over as well. We will eventually end up with multiple languages (and PHP) and a multitude of web programming abilities, which is a bright future for all of us – except those who will have to work with PHP, of course.

The crucial bit is not to stay behind the curve on new developments, and to push to create new things where appropriate. We shouldn’t just relax with what we have, we should always demand more of ourselves and try and create it, and not wait for other languages to say “hey, this sucks, let’s try fixing it”. That’s what we’re known for, so let’s get back to that.

NV: Your "CGI.pm must die" talk has gone viral. Is CGI.pm really that bad?

SX: CGI.pm wasn’t the module we deserved, but the module we needed. At the time, it was the best thing that happened for Perl and for the web. However, those days had passed. Unfortunately, while Perl evolved, some people stayed at the decade of CGI.pm. We won’t reach far if we’re still sticking to the best and latest of 1995. Some of us are quite literally stuck in the previous century, it’s not even funny. Well, it is a bit. It’s also sad.

People often ask me “is CGI.pm that horrible?” and the answer is that, in all honesty, yes, it really is! But that’s not why I go completely apeshit about it. If I may quote a great poet, “it’s about sending a message”. If I would have given a talk entitled “use PSGI/Plack”, it wouldn’t stick as much as suggesting to kill a module with fire, now would it?

We should all be thankful to Lincoln D. Stein who gave us CGI.pm, and now retire it in favor of PSGI/Plack. I had received an email from Lincoln saying he watched the talk I gave, enjoyed it (which was a great honor for me), and fully agrees with moving forward. And while we’re moving onwards to bigger and better, we should check out the new stuff Lincoln has been working on, like his VM::EC2 module.

NV: Would you someday consider switching from Perl 5 to Perl 6? If so, what are your thoughts on Perl 6 and given the opportunity, would you someday re-write Dancer in it?

SX: I would love a chance to work with Perl 6 in a professional capacity, but I don’t see it in the near future. It’s a shame, because, in my opinion, it’s by far the most innovative and interesting language available today.

We've all been hoping Perl 6 would hit the ground running, and it took some time to realize it isn't that simple. However, nowadays Perl 6 interpreters are being released regularly, and there's work being done to bring Perl 5 and Perl 6 closer, both community-wise and technically.

Some amazing ideas came out of Perl 6, some of which were ported to Perl 5, some of which successfully. When it comes to language features and ability, Perl 6 has done a lot of right, even though it also made several mistakes. Hindsight is 20/20, and if we could go back, things would have been done differently. All in all, I think it’s best for us all to concentrate on the current state and the future – and those look bright.

I will likely not have to rewrite Dancer in Perl 6 because a bare-bones Dancer port has already been written by tadzik (Tadeusz Sosnierz) called Bailador. I haven’t looked at the internals too much, so I’m not sure if the design flaws we had to deal with exist there too. Either way, I’m certain it’s in good hands, and I hope that in the future I will be able to contribute to that.

I just want to add one last note, if I may. I want to thank our community, who push us closer together, while pushing us to work harder at the same time. It’s a great joy and delight. And I want to also thank the core team for being a wonderful gang to work with, each and every single one. And I’d like to thank you, for giving me the opportunity to talk about Perl and Dancer.


All about Dancer – interview of Sawyer X part 2

2014 May 2
by Nikos Vaggalis

NV: Is Dancer 2 a complete rewrite, and why? What shortcomings of the first version does it address?

SX: Dancer 2 is indeed a complete rewrite of the code, and for good reason.

Dancer started as a fork of a fun web framework in Ruby called Sinatra. The founder of Dancer, Alexis Sukrieh, being a polyglot (programming in both Perl and Ruby), had used Sinatra, and wished to have a similar framework in Perl.

As Dancer evolved through its users and community, gaining numerous additional features, it became apparent that some of the original design decisions regarding architecture were a problem. Specifically, engines, which are the core tenets of Dancer, are all singletons. This means every Dancer subsystem (and consequently, every piece of code you write in Dancer in the same process) will share the same engine.

An interesting example is the serializer. This means that one piece of code in which you want automatic serialization would require all your other pieces of code to work with serialization. You cannot control that.

When we realized we could not force Dancer to do the right thing when it came to engines, we resorted to rewriting from scratch. This allowed several improvements: using a modern object system (Moo), having contextual engines and DSL, and decoupled mini-applications, called Dancer apps.

NV: There is a lot of talk about Plack/PSGI. What is it, and what is the advantage of hooking into it?

SX: PSGI is a protocol, which means it's literally a paper describing how a server and an application should speak to each other. It specifies the parameters each expects and how they should respond. In essence, it's an RFC for a new standardized protocol.

Plack is a set of tools for writing PSGI applications and servers. We can use Plack as reference implementation, or as actual utilities for working with any layer of the PSGI stack, whether it’s the server or the client.

PSGI, inspired by Python’s WSGI and Ruby’s Rack, has many qualities which rendered it an instant success: it’s simple, understandable, works across web servers, includes advanced features such as application mounting, middlewares, long polling requests, and even asynchronous responses.

This made Plack/PSGI a solid foundation for writing web servers and web frameworks. All major web frameworks in Perl support PSGI, and many web servers started appearing, offering ranges of features from pre-forking to event-based serving: Starman, Twiggy, Starlet, Feersum, and many more.
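As a concrete illustration, the whole PSGI contract fits in a few lines: an application is simply a code reference that takes an environment hash and returns a three-element array of status, headers, and body. The sketch below is a minimal example in core Perl; a real deployment would hand `$app` to a PSGI server such as Starman via `plackup` rather than calling it directly.

```perl
use strict;
use warnings;

# A complete PSGI application: env hash in, [status, headers, body] out.
my $app = sub {
    my $env = shift;
    my $body = 'Hello from ' . ($env->{PATH_INFO} // '/');
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ $body ],
    ];
};

# Invoking it by hand, the way any PSGI server would:
my $res = $app->({ PATH_INFO => '/echo' });
print "$res->[0] $res->[2][0]\n";   # 200 Hello from /echo
```

Because both sides only need to agree on this data shape, any PSGI framework can run on any PSGI server, which is exactly the interoperability Sawyer describes.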

NV: What functionality do I get out of the box, and what tasks does Dancer take care of for me so I don't have to? For example, does it include measures for preventing XSS attacks or SQL injection? Or implement a variety of authentication schemes?

SX: Dancer attempts to be a thin layer over your application code. It provides keywords to help you navigate through the web of… web programming. :)

If you want to define dispatching for your application paths, these are our routes. If you want to check for sessions, we have syntax to store them and retrieve them across different session storages. If you need to set cookies, we have easy syntax for that.

The idea with Dancer is that it gives you all the small bits and pieces to hook up your application to your web users, and then it vanishes in the background and stays out of your way.

We make an effort of making sure we provide you with code that is flexible, performant, and secure. We take security patches seriously, and our code is publicly available, so security audits are more than welcome.

NV: Plugins and extensibility. How easy is it to extend the DSL, consume CPAN modules, and hook plugins into it? What are some of the most useful plugins I can choose from (engines for templates, sessions, authentication, DBMSs, etc.)?

SX: When you call "use Dancer2" in order to write your web code, the DSL is generated explicitly for your scope, and it can differ from the DSL of another scope. The reason for this is so you can use modules that extend that DSL. This is how plugins work.

It’s very important to note that we promote using Plack middlewares (available under the Plack::Middleware class), so your code can work across multiple web frameworks. Still, there are quite a few Dancer plugins to achieve various tasks through Dancer.

There is a list of recommended modules in Task::Dancer and here are a few I would recommend checking out:

  • Dancer::Plugin::REST – Writing RESTful apps quickly
  • Dancer::Plugin::Database – Easy database connections
  • Dancer::Plugin::Email – Integrate email sending
  • Dancer::Plugin::NYTProf – Easy profiling for Dancer using NYTProf
  • Dancer::Plugin::SiteMap – Automatically generate a sitemap
  • Dancer::Plugin::Auth::Tiny – Authentication done right
  • Dancer::Plugin::Cache::CHI – CHI-based caching for Dancer

NV: What about dependencies on third-party components? Is the framework lightly or heavily affected?

SX: I love this question, because it allows me to talk about our lovely community.

We try to be community-oriented. Our community is what fuelled Dancer from a simple web dispatching piece of code to a competitor for full-fledged production websites and products.

The original assumption with Dancer was that dependencies are a problem. While it is possible to reinvent the wheel, it comes at a cost. We tried to balance it out by having as few dependencies as possible, and reinventing where needed.

With time, however, the community expressed a completely different view of the matter. People said, “we don’t give a damn about dependencies. If we can install Dancer, we can install dependencies.”

By the time Dancer 2 came around, we already had so many ways to install dependencies in Perl, that it really wasn’t a problem anymore. We have great projects like FatPacker, local::lib, carton, Pinto, and more. This allowed us to remove a lot of redundant code in Dancer, to make it faster, easier to handle, more predictable, and even add features. The community was very favorable to that, and we’re happy we made that decision.

So our current approach is “if we need a dependency, we’ll add it”. Last release, actually, we removed a dependency. We just didn’t need it. Our current stack is still relatively small. I think we have a balance, and we’re always open to more feedback from the community about this.

I’ll take every chance to talk about how the community is driving the project. :)

Nikos Vaggalis has a BSc in Computer Science and a MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

All about Dancer – interview of Sawyer X

2014 April 25
by Nikos Vaggalis

After we looked into Catalyst, we continue our exploration of Perl’s land of web frameworks with Dancer.

We talk about it with one of the core devs, Sawyer X, who kindly answered our questions in a very detailed and explanatory way, rendering the interview enjoyable and comprehensible even for non-techies.

The interview, which spans three parts (to be published weekly), did not stop there however; we also asked his opinion and knowledge on finer grained aspects of the craft that is developing for the Web, such as what the advantage of routing over query strings is, MVC vs Routes, Perl vs PHP vs Ruby, and why CGI.pm must die!

NV: The term might be overloaded, but is Dancer an MVC framework or, should I say, a “route-based” framework? What’s the difference?

SX:Usually MVC frameworks are defined by having a clear distinction between the model (your data source, usually a database), the view (your display, usually a template), and the controller (your application code).

While Dancer helps you to easily separate them (such as providing a separate directory for your templates called “views”, by default), it doesn’t go as far as some other frameworks in how much it confines you to those conventions.

I like to describe Dancer as “MVC-ish”, meaning it has a very clear notion of MVC, but it isn’t restrictive. If you’re used to MVC, you will feel at home. If you’re not, or don’t wish to have full-fledged MVC separation, you aren’t forced to have such either.

Most web frameworks use – what I would call – namespace-matching, in which the path you take has an implicit translation to a directory structure, a function, and the optional arguments. For example, the path /admin/shop/product/42 implicitly states it would (most likely) go to a file called Admin/Shop.pm, to a function called product, and will provide the parameter 42 to the function. The implicit translation here is from the URL /admin/shop/product/42 to the namespace Admin::Shop::product, and 42 as the function parameter.
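The implicit translation described above can be sketched in a few lines. This is a hypothetical illustration in Python (not code from any actual framework) of how a namespace-matching framework might map a URL onto a module, a function, and an argument:

```python
def namespace_match(path):
    """Translate a URL path into (module, function, arg) the way a
    namespace-matching framework implicitly would. Heuristic: the last
    segment is the argument, the one before it is the function, and the
    rest form the module namespace."""
    parts = [p for p in path.split("/") if p]
    *module, func, arg = parts
    module_name = "::".join(p.capitalize() for p in module)
    return module_name, func, arg

print(namespace_match("/admin/shop/product/42"))
# ('Admin::Shop', 'product', '42')
```

The point is that the mapping is a fixed convention baked into the framework, which is exactly what route-based frameworks replace with explicit declarations.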

Route-based frameworks declare explicitly what code to run on which path. You basically declare /admin/shop/product/$some_id will run a piece of code. Literally that is it. There is no magic to describe since there is no translation that needs to be understood.

NV: What is the advantage of routing, and why is the traditional model of query strings not enough?

SX:The route-based matching provides several essential differences:

  • It is very simple for a developer to understand
  • It takes the user perspective: if the user goes to this path, run this code
  • It stays smaller since it doesn’t require any specific structure for an implicit translation to work, unlike namespace-matching

The declarative nature of route-based frameworks is quite succinct: it dictates which code to run and when. As explained, you are not confined to any structure. You can just settle for a single line:

get '/admin/shop/product/:id' => sub {...};

This provides a lot of information in one line of code. No additional files, no hierarchy. It indicates we’re expecting GET requests which will go to /admin/shop/product/:id. The :id is an indication that this segment should be a variable (such as a number or name), and we want it assigned the name id so we can then use it directly. When that path is reached, we will run that subroutine. It really is that simple. We can reach that variable using a Dancer keyword, specifically param->{'id'}.
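The way a pattern like /admin/shop/product/:id captures its named segment can be sketched in Python. This is an illustration of the matching idea only, not Dancer's implementation:

```python
def match_route(pattern, path):
    """Match a Dancer-style route pattern against a path. Returns the
    captured params as a dict, or None when the path doesn't match."""
    pat_parts = pattern.strip("/").split("/")
    parts = path.strip("/").split("/")
    if len(pat_parts) != len(parts):
        return None
    params = {}
    for pat, seg in zip(pat_parts, parts):
        if pat.startswith(":"):       # placeholder: capture the segment
            params[pat[1:]] = seg
        elif pat != seg:              # literal segment must match exactly
            return None
    return params

print(match_route("/admin/shop/product/:id", "/admin/shop/product/42"))
# {'id': '42'}
```

A framework keeps a list of such (pattern, subroutine) pairs and runs the subroutine of the first pattern that matches, handing it the captured params.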

NV: Dancer is a complete, feature-rich DSL. Does this mean that I write code in a dedicated language and not Perl?

SX: All of the web layer done through Dancer can be written in the DSL, but the rest of your application is done in Perl. Dancer just provides a comfortable, clean, beautiful language for defining your web layer and adding parts to it (like cookies, different replies, template rendering, etc.). In Dancer2 the DSL is built atop a clean object-oriented interface and provides nice keywords on top of it.

That is really what a web framework should do: provide a nice clean layer on top of your application.

I guess a better way would be to say you write your code in Perl with dedicated functions that look nicer. :)

NV: There are many web frameworks out there, each targeting various “problem” areas of web development. Which ones does Dancer address?

SX: Dancer provides a sane, thin layer for writing stable websites. It introduces relatively few dependencies, but doesn’t reinvent everything. It uses sane defaults and introduces basic recommended conventions, but isn’t too opinionated and remains flexible in the face of a multitude of use cases.

NV: What about the other Perl frameworks, Catalyst and Mojolicious? How do they compare to Dancer?

SX: Catalyst, as great a framework as it is, is pretty big. It uses an enormous number of modules and is clearly very opinionated. This is not necessarily a bad thing, but it might not be what you’re interested in.

Mojolicious pushes the boundaries of HTML5 programming, providing all the eye-candy features HackerNews buzzes about, and is very successful at that.

Dancer fills the niche between those. It provides a stable base for your website. It does not depend on as many modules, but it does not reinvent every single wheel in existence. It’s the “oh my god, this is what my production websites look like now!” call of gleeful cheer. :)

Nikos Vaggalis has a BSc in Computer Science and a MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

C# Guru – An Interview With Eric Lippert

2014 April 22
by Nikos Vaggalis

Eric Lippert’s name is synonymous with C#. Having been Principal Developer at Microsoft on the C# compiler team and a member of the C# language design team he now works on C# analysis at Coverity.

If you know C# then the name Eric Lippert will be synonymous with clear explanations of difficult ideas and insights into the way languages work and are put together.

Here we host an overall summary of the highlights of the interview ranging over topics as diverse as the future of C#, asynchronous v parallel, Visual Basic and more (the link to the full interview on i-programmer can be found at the end of this page), so read on because you will surely find something to interest you about C#, languages in general or just where things are heading.

NV : So Eric, after so many years at Microsoft you began a new career at Coverity. Was the ‘context switch’ easy?

EL : Yes and no. Some parts of it were very easy and some took some getting used to.

For example, re-learning how to use Unix-based development tools, which I had not touched since the early 1990s, took me a while. Git is very different from Team Foundation Server. And so on. But some things were quite straightforward.

Coverity’s attitude towards static analysis is very similar to the attitude that the C# compiler team has about compiler warnings, for instance. Though of course the conditions that Coverity is checking for are by their nature much more complicated than the heuristics that the C# compiler uses for warnings.

Switching from taking a bus to downtown every day instead of taking a bus to Redmond every day was probably the easiest part!

NV: I guess that from now on you’ll be working on the field of static analysis. What exactly does static analysis do?

EL: Static analysis is a very broad field in both industry and academia. So let me first start very wide, and then narrow that down to what we do at Coverity.

Static analysis is analysis of programs based solely on their source code or, if the source code is not available, their compiled binary form. That is in contrast with dynamic analysis, which analyses program behavior by watching the program run. So a profiler would be an example of dynamic analysis; it looks at the program as it is running and discovers facts about its performance, say.

Any analysis you perform just by looking at the source code is static analysis. So for example, compiler errors are static analysis; the error was determined by looking at the source code.

So now let’s get a bit more focused. There are lots of reasons to perform static analysis, but the one we are focused on is the discovery of program defects. That is still very broad. Consider a defect such as “this public method violates the Microsoft naming guidelines”. That’s certainly a defect. You might not consider that a particularly interesting or important defect, but it’s a defect.

Coverity is interested in discovering a very particular class of defect.
That is, defects that would result in a bug that could realistically affect a user of the software. We’re looking for genuine “I’m-glad-we-found-that-before-we-shipped-and-a-customer-lost-all-their-work” sorts of bugs, not something like a badly named method that the customer is never going to notice.

NV: Do Code contracts play a role, and will the introduction of Roslyn affect the field of static analysis?

EL: Let me split that up into two questions. First, code contracts.
So as you surely know, code contracts are annotations that you can optionally put into your C# source code that allow you to express the pre-condition and post-condition invariants about your code. So then the question is, how do these contracts affect the static analysis that Coverity does? We have some support for understanding code contracts, but we could do better and one of my goals is to do some research on this for future versions.

One of the hard things about static analysis is that the number of possible program states and the number of possible code paths through the program are extremely large, which can make analysis very time-consuming. So one of the things we do is try to eliminate false paths — that is, code paths that we believe are logically impossible, and therefore do not have to be checked for defects. We can use code contracts to help us prune false paths.

A simple example would be if a contract says that a precondition of the method is that the first argument is a non-null string, and that argument is passed to another method, and the second method checks the argument to see if it is null. We can know that on that path – that is, via the first method – the path where the null check says “yes it is null” is a false path. We can then prune that false path and not consider it further. This has two main effects. The first is, as I said before, we get a significant performance gain by pruning away as many false paths as possible. Second, a false positive is when the tool reports a defect but does so incorrectly. Eliminating false paths greatly decreases the number of false positives. So we do some fairly basic consumption of information from code contracts, but we could likely do even more.
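The pruning idea can be modeled abstractly: represent each code path as a set of constraints, and drop any path whose constraints contradict a fact guaranteed by a contract. This is a toy Python model of the concept, not anything resembling Coverity's engine:

```python
def prune_false_paths(precondition, paths):
    """Keep only the paths whose constraints don't contradict the
    precondition. Constraints are (variable, fact) pairs; two facts
    about the same variable contradict when they differ."""
    var, known_fact = precondition
    feasible = []
    for constraints in paths:
        contradicts = any(v == var and fact != known_fact
                          for v, fact in constraints)
        if not contradicts:          # this path is still possible
            feasible.append(constraints)
    return feasible

# Contract: the first argument is non-null. The "yes it is null"
# branch inside the callee is therefore a false path.
pre = ("arg", "non-null")
paths = [
    [("arg", "null")],       # pruned: impossible given the contract
    [("arg", "non-null")],   # kept: still needs defect checking
]
print(prune_false_paths(pre, paths))
# [[('arg', 'non-null')]]
```

Halving the set of paths to explore at each such check is where both the performance gain and the drop in false positives come from.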

Now to address your second question, about Roslyn. Let me first answer the question very broadly. Throughout the industry, will Roslyn affect static analysis of C#? Absolutely yes, that is its reason for existing.

When I was at Microsoft I saw so many people write their own little C# parsers or IDEs or little mini compilers or whatever, for their own purposes. That’s very difficult, it’s time-consuming, it’s expensive, and it’s almost impossible to do right. Roslyn changes all that, by giving everyone a library of analysis tools for C# and VB which is correct, very fast, and designed specifically to make tool builder’s lives better.
I am very excited that it is almost done! I worked on it for many years and can’t wait to get my hands on the release version.

More specifically, will Roslyn affect static analysis at Coverity? We very much hope so. We work closely with my former colleagues on the Roslyn team. The exact implementation details of the Coverity C# static analyzer are of course not super-interesting to customers, so long as it works. And the exact date Roslyn will be available is not announced.

So any speculation as to when there will be a Coverity static analyzer that uses Roslyn as its front end is just that — speculative. Suffice to say that we’re actively looking into the possibility.

NV: What other possibilities does Roslyn give rise to? Extending the language, macros/mutable grammars, Javascript like Eval, REPL?

EL: Some of those more than others.
Let me start by taking a step back and reiterating what Roslyn is, and is not. Roslyn is a class library usable from C#, VB or other managed languages. Its purpose is to enable analysis of C# and VB code. The plan is for future versions of the C# and VB compilers and IDEs in Visual Studio to themselves use Roslyn.

So typical tasks you could perform with Roslyn would be things like:

  • “Find all usages of a particular method in this source code”
  • “Take this source code and give me the lexical and grammatical analysis”
  • “Tell me all the places this variable is written to inside this block”

Let me quickly say what it is not. It is not a mechanism for customers to themselves extend the C# or VB languages; it is a mechanism for analyzing the existing languages. Roslyn will make it easier for Microsoft to extend the C# and VB languages, because its architecture has been designed with that in mind. But it was not designed as an extensibility service for the language itself.

You mentioned a REPL. That is a Read-Eval-Print Loop, which is the classic way you interface with languages like Scheme. Since the Roslyn team was going to be re-architecting the compiler anyway they put in some features that would make it easier to develop REPL-like functionality in Visual Studio. Having left the team, I don’t know what the status is of that particular feature, so I probably ought not to comment on it further.

One of the principal scenarios Roslyn was designed for is making it much easier for third parties to develop refactorings. You’ve probably seen in Visual Studio that there is a refactoring menu and you can do things like “extract this code to a method” and so on.
Any of those refactorings, and a lot more, could be built using Roslyn.

As for whether there will be an eval-like facility for spitting out fresh code at runtime, like there is in JavaScript, the answer is sort of. I worked on JavaScript in the late 1990s, including the JScript.NET language that never really went anywhere, so I have no small experience in building implementations of JS “eval”. It is very hard. JavaScript is a very dynamic language; you can do things like introduce new local variables in “eval” code.

There is to my knowledge no plan for that sort of very dynamic feature in C#. However, there are things you can do to solve the simpler problem of generating fresh code at runtime. The CLR of course already has Reflection Emit. At a higher level, C# 3.0 added expression trees. Expression trees allow you to build a tree representing a C# or VB expression at runtime, and then compile that expression into a little method. The IL is generated for you automatically.
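The expression-tree idea, building a tree that represents an expression and then compiling it into a callable at runtime, has a rough analogue in Python's ast module. The following is an analogy only, not the C# API:

```python
import ast

# Build a tree for "lambda x: x * x + 1" programmatically,
# then compile it into a real callable, the way a C# expression
# tree is compiled into a little method.
tree = ast.Expression(
    body=ast.Lambda(
        args=ast.arguments(posonlyargs=[], args=[ast.arg(arg="x")],
                           kwonlyargs=[], kw_defaults=[], defaults=[]),
        body=ast.BinOp(
            left=ast.BinOp(left=ast.Name(id="x", ctx=ast.Load()),
                           op=ast.Mult(),
                           right=ast.Name(id="x", ctx=ast.Load())),
            op=ast.Add(),
            right=ast.Constant(value=1))))
ast.fix_missing_locations(tree)

square_plus_one = eval(compile(tree, "<generated>", "eval"))
print(square_plus_one(5))
# 26
```

In both worlds the payoff is the same: code constructed as data at runtime, with the bytecode (or IL, in the C# case) generated for you automatically.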

If you are analysing source code with Roslyn then there is I believe a facility for asking Roslyn “suppose I inserted this source code at this point in this program — how would you analyze the new code?”

And if at runtime you started up Roslyn and said “here’s a bunch of source code, can you give me a compiled assembly?” then of course Roslyn could do that. If someone wanted to build a little expression evaluator that used Roslyn as a lightweight code generator, I think that would be possible, but I’ve never tried it.

It seems like a good experiment. Maybe I’ll try to do that.

NV: Although the TPL and async/await were great additions to both C# and the framework, they were also the cause of a lot of commotion, generating more questions than answers:

What’s the difference between Asynchrony and Parallelism?

EL: Great question. Parallelism is one technique for achieving asynchrony, but asynchrony does not necessarily imply parallelism.

An asynchronous situation is one where there is some latency between a request being made and the result being delivered, such that you can continue to process work while you are waiting. Parallelism is a technique for achieving asynchrony, by hiring workers – threads – that each do tasks synchronously but in parallel.

An analogy might help. Suppose you’re in a restaurant kitchen. Two orders come in, one for toast and one for eggs.

A synchronous workflow would be: put the bread in the toaster, wait for the toaster to pop, deliver the toast, put the eggs on the grill, wait for the eggs to cook, deliver the eggs. The worker – you – does nothing while waiting except sit there and wait.

An asynchronous but non-parallel workflow would be: put the bread in the toaster. While the toast is toasting, put the eggs on the grill. Alternate between checking the eggs, checking the toast, and checking to see if there are any new orders coming in that could also be started.

Whichever one is done first, deliver first, then wait for the other to finish, again, constantly checking to see if there are new orders.

An asynchronous parallel workflow would be: you just sit there waiting for orders. Every time an order comes in, go to the freezer where you keep your cooks, thaw one out, and assign the order to them. So you get one cook for the eggs, one cook for the toast, and while they are cooking, you keep on looking for more orders. When each cook finishes their job, you deliver the order and put the cook back in the freezer.

You’ll notice that the second mechanism is the one actually chosen by real restaurants because it combines low labour costs – cooks are expensive – with responsiveness and high throughput. The first technique has poor throughput and responsiveness, and the third technique requires paying a lot of cooks to sit around in the freezer when you really could get by with just one.
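The asynchronous-but-non-parallel kitchen can be sketched with Python's asyncio: a single worker (one thread) interleaves both orders while the "appliances" run on their own (timers stand in for the toaster and grill here; this is a rough analogy, not the C# machinery):

```python
import asyncio

log = []

async def make_toast():
    log.append("bread in toaster")
    await asyncio.sleep(0.02)   # the toaster works; we don't watch it
    log.append("toast done")

async def make_eggs():
    log.append("eggs on grill")
    await asyncio.sleep(0.01)   # the grill likewise runs on its own
    log.append("eggs done")

async def kitchen():
    # One cook, two orders: start both, serve each as it finishes.
    await asyncio.gather(make_toast(), make_eggs())

asyncio.run(kitchen())
print(log)
# ['bread in toaster', 'eggs on grill', 'eggs done', 'toast done']
```

Note that both orders are in progress at once and the eggs (the shorter job) finish first, yet only one worker ever exists, which is exactly the distinction between asynchrony and parallelism.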

NV: If async does not start a new thread in the background how can it perform I/O bound operations and not block the UI thread?

EL: Magic!

No, not really.

Remember, fundamentally I/O operations are handled in hardware: there is some disk controller or network controller that is spinning an iron disk or varying the voltage on a wire, and that thing is running independently of the CPU.

The operating system provides an abstraction over the hardware, such as an I/O completion port. The exact details of how many threads are listening to the I/O completion port and what they do when they get a message, well, all that is complicated.

Suffice to say, you do not have to have one thread for each asynchronous I/O operation any more than you would have to hire one admin assistant for every phone call you wanted answered.
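The "no thread per operation" point can be illustrated with the OS readiness primitives that abstractions like I/O completion ports are built over. In Python, the selectors module lets a single thread wait on many sockets at once (a minimal sketch of the principle, not how the CLR implements it):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Two independent "connections" (socketpairs stand in for network peers).
a_recv, a_send = socket.socketpair()
b_recv, b_send = socket.socketpair()
for s in (a_recv, b_recv):
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ)

# The peers send data; our single thread was never blocked on either one.
b_send.send(b"from b")
a_send.send(b"from a")

ready = sel.select(timeout=1)   # one wait covers both sockets
messages = sorted(key.fileobj.recv(16) for key, _ in ready)
print(messages)
# [b'from a', b'from b']
```

One thread, one wait call, any number of in-flight operations: the same shape as one admin assistant fielding every phone line.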

NV: What feature offered by another language do you envy the most and would like to see in C#?

EL: Ah, good question.
That’s a tricky one because there are languages that have features that I love which actually, I don’t think would work well in C#.

Take F# pattern matching for example. It’s an awesome feature. In many ways it is superior to more traditional approaches for taking different actions on the basis of the form that some data takes.But is there a good way to hammer on it so that it looks good in C#? I’m not sure that there is. It seems like it would look out of place.

So let me try to think of features that I admire in other languages but I think would work well in C#. I might not be able to narrow it down to just one.

Scala has a lot of nice features that I’d be happy to see in C#. Contravariant generic constraints, for example. In C# and Scala you can say “T, where T is Animal or more specific”. But in Scala you can also say “T, where T is Giraffe or less specific”. It doesn’t come in handy that often but there are times when I’ve wanted it and it hasn’t been there in C#.

There’s a variation of C# called C-Omega that Microsoft Research worked on. A number of features were added to it that did not ever get moved into C# proper. One of my favorites was a yield foreach construct that would automatically generate good code to eliminate the performance problem with nested iterators. F# has that feature, now that I think of it. It’s called yield! in F#, which I think is a very exciting way to write the feature!
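Python, for what it's worth, grew an equivalent of that construct: yield from delegates to a nested iterator directly instead of re-yielding each element by hand at every level (shown here purely as an analogy for yield foreach / F#'s yield!):

```python
def leaves(tree):
    """Flatten a nested list. 'yield from' delegates to the inner
    generator rather than looping and re-yielding element by element."""
    for node in tree:
        if isinstance(node, list):
            yield from leaves(node)   # delegate to the nested iterator
        else:
            yield node

print(list(leaves([1, [2, [3, 4]], 5])))
# [1, 2, 3, 4, 5]
```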

I could go on for some time but let’s stop listing features there.

NV: What will the feature set of C# 6.0 be?

EL: I am under NDA and cannot discuss it in detail, so I will only discuss what Mads Torgersen has already disclosed in public. Mads did a “Future of C#” session in December of last year. He discussed eight or nine features that the C# language design team is strongly considering for C# 6.0. If you read that list carefully — Wesner Moise has a list here

http://wesnerm.blogs.com/net_undocumented/2013/12/mads-on-c-60.html

– you’ll see that there is no “major headliner” feature.

I’ll leave you to draw your own conclusions from that list.

Incidentally, I knew Wesner slightly in the 1990s. Among his many claims to fame is he invented the pivot table. Interesting guy.

NV: Java, as tortured as it might be, revitalizes itself thanks to Linux and the popularity of mobile devices. Does the future of .NET and C# depend on the successful adoption of Windows on mobile devices?

EL: That’s a complicated question, as are all predictions of the future.

But by narrowly parsing your question and rephrasing it into an — I hope — equivalent form, I think it can be answered. For the future of technology X to depend on the success of technology Y means “we cannot conceive of a situation in which Y fails but X succeeds”.

So, can we conceive of a situation in which the market does not strongly adopt Windows on mobile devices, but C# is adopted on mobile devices? Yes, absolutely we can conceive of such a situation.

Xamarin’s whole business model is predicated on that conception. They’ve got C# code running on Android, so C# could continue to have a future on the mobile platform even if Windows does not get a lot of adoption.

Or, suppose Microsoft fails to make headway with Windows on mobile and Xamarin fails to make headway with C# on Android, and so on. Can we conceive of a world in which C# still has a future? Sure.

Mobile is an important part of the ecosystem, but it is far from the whole thing. There are lots of ways that C# could continue to thrive even if it is not heavily adopted as a language for mobile devices.

If the question is the more general question of “is C# going to thrive?” I strongly believe that it is. It is extremely well placed: a modern programming language with top-notch tools and a great design and implementation team.

NV: Do you think that C# and the managed world as a whole could be “threatened” by C++11?

EL: Is C# “threatened” by C++11?

Short answer: no.

There’s a saying amongst programming language designers – I don’t know who said it first – that every language is a response to the perceived shortcomings of another language.

C# was absolutely a response to the shortcomings of C and C++. (C# is often assumed to be a response to Java, and in a strategic sense, it was a response to Sun. But in a language design sense it is more accurate to say that both C# and Java were responses to the shortcomings of C++.)

Designing a new language to improve upon an old one not only makes the world better for the users of the new language, it gives great ideas to the proponents of the old one. Would C++11 have had lambdas without C# showing that lambdas could work in a C-like language? Maybe. Maybe not. It’s hard to reason about counterfactuals.
But I think it is reasonable to guess that it was at least a factor.

Similarly, if there are great ideas in C++11 then those will inform the next generation of programming language designers. I think that C++ has a long future ahead of it still, and I am excited that the language is still evolving in interesting ways.

Having choice of language tools makes the whole developer ecosystem better. So I don’t see that as a threat at all. I see that as developers like me having more choice and more power tools at their disposal.

NV: What is your reply to the voices saying that C# has grown out of proportion and that we’ve reached the point where nobody except its designers can have a complete understanding of the language?

EL: I often humorously point out that the C# specification begins with “C# is a simple language” and then goes on for 800 dense pages. It is very rare for users to write even large programs that use all the features of C# now. The language has undoubtedly grown far more complex in the last decade, and the language designers take this criticism very seriously.

The designers work very hard to ensure that new features are “in the spirit” of the language, that design principles are applied consistently, that features are as orthogonal as possible, and that the language grows slowly enough that users can keep up, but quickly enough to meet the needs of modern programmers. This is a tough balance to strike, and I think the designers have done an exceptionally good job of it.

NV: Where is programming as an industry heading, and will increasingly smarter compilers that make programming accessible to anyone (something very positive, of course) pose a threat to the professional programmer’s job stability?

EL: I can’t find the figure right now, but there is a serious shortage of good programmers in the United States. A huge number of jobs that require some programming ability are going unfilled. That is a direct brake on the economy. We need to either make more skilled programmers, or make writing robust, correct, fully-featured, usable programs considerably easier. Or, preferably, both.

So no, I do not see improvements in language tools that make it easier for more and more people to become programmers as any kind of bad thing for the industry. Computers are only becoming more ubiquitous. What we think of as big data today is going to look tiny in the future, and we don’t have the tools to effectively manage what we’ve got already.

There is going to be the need for programmers at every level of skill working on all kinds of problems, some of which haven’t even been invented yet. This is why I love working on developer tools; I’ve worked on developer tools my whole professional life. I want to make this hard task easier. That’s the best way I can think of to improve the industry.

Link to the full interview on i-programmer

Nikos Vaggalis has a BSc in Computer Science and a MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

Helen Schell at Maker Faire, UK

2014 April 16

Helen Schell will be exhibiting once again at the Newcastle Maker Faire on 26th-27th April at the Centre for Life.

 

Beam Dress

She will be showing her latest creation the Beam Dress, which was created for Light up the Streets in Lancaster, UK, last winter.  It is one of a series of Smart Materials dresses exploring light reactive materials. In December, 2013, it was also displayed at the Mattereality Conference at the Scottish Museum of Modern Art, Edinburgh.

Alongside the Beam Dress, her recent short film, UN-Dress, will also be screened. This is a continuation of her previous creations that you may have seen at other shows. UN-Dress was a ball-gown made from dissolvable thermoplastic, created for the Undress: Redress project in 2012. It was filmed dissolving during a performance at the Whitley Film Festival last year.


This dress was commissioned by Science Learning Centre North East as part of The Fashioning Science project.

This project also included The Dazzle Dress, which was made from Hi-Vis safety jackets, creating a futuristic and unusual ball-gown. It has been exhibited at the London Science Museum, Newcastle Maker Faire and Durham Cathedral. The project was short-listed for the North East Culture Awards in 2013.

Helen Schell is a visual artist and ESERO-UK Space Ambassador who specialises in artworks about space exploration and the science of the cosmos, and is based at the NewBridge Project in the north east of England.

She organises exhibitions, residencies and children’s education projects by using art and craft techniques to communicate science. Through diverse projects, she invents inclusive activities to get participants interested in space and future technology. She makes large mixed media art installations and paintings, often described as ‘laboratories’.

In 2010, she was artist in residence at Durham University’s Ogden Centre for Fundamental Physics, where she collaborated with scientists and created a Space-Time Laboratory.

In 2012, as ‘Maker in Residence’ at the Newcastle Centre for Life, she created ‘Make it to the Moon’, an interactive education experience, mainly for children, where they imagine setting up a colony on the Moon. These workshops also went to the London Science Museum, the British Science Festival and Arts Catalyst. Other projects include being a judge for NASA’s children’s art competition ‘Humans in Space’.

Always aiming to reach diverse groups, in 2013 her space art projects and workshops included Durham Cathedral, Hexham Abbey, and Gateshead Library for the Festival of the North East. In 2014, she created a project with Royal Holloway University, Invisible to Visible, exploring Dark Matter, Extra Spatial Dimensions and CERN, through a series of experimental art books and a large family workshop.

The Moon Rocket, Hi-Vis Ball-gowns and UN-Dress film will be shown at Loncon3, ExCeL Centre, London Docklands, 14-18 August 2014.

For further information, please visit the Newbridge Project.