
From books to software

2014 June 18
by Josette Garcia

I have some exciting news to share – I have joined 2ndQuadrant as Community Manager. It is a rather different role for me, even though I have been working with the Open Source community for years – Perl, Python, PHP and Linux hold no secrets for me – so why am I so nervous? Could the reason be that a book is a very physical thing: even when I did not understand the content, I could always grab one and read the preface or learn about the animal on the cover. Software, for a non-techie, is rather elusive and full of mysteries – you can’t see anything, you can’t touch anything – but I am sure I will get used to it.

Why 2ndQuadrant?

  • You guessed it – 2ndQuadrant works with the Open Source community and provides lots of add-ons to PostgreSQL which are free for everyone to work on.
  • I like the philosophy of 2ndQuadrant, as a big part of the revenue is ploughed back into the community.
  • It is a worldwide company with offices in the US, UK, France, Italy, Germany, the Nordic region and South America.
  • Collectively, 2ndQuadrant provides over 100 man-years of PostgreSQL expertise across all areas of the Database Lifecycle Management process.

First days out

I will be attending the following conferences:

  • CHAR(14), Milton Keynes, 8th July – for anybody interested in Clustering, High Availability and Replication, and much more. 2ndQuadrant and Translattice will be presenting the important new technology of BiDirectional Replication (BDR). BDR will be explained in more detail, offering you the first opportunity to understand how it works and what implementation options are available – directly from the developers.
  • PGDay, Milton Keynes, 9th July – with talks on PostgreSQL features in 9.4, Migrating rant & rave to PostgreSQL, Business Intelligence with Window Functions, New BI features from the AXLE project and much more, PGDay focuses on bringing PostgreSQL users to a whole new level of understanding. It will cover the core topics you need to be successful with PostgreSQL and will give you the opportunity to network with fellow users.

“My community”

If the members of your user group or colleagues from your company or institution are interested in PostgreSQL, please let me know, as I would like to build a bigger, stronger community around this database. Also, please let me know of any conferences or events we should attend, or anything else I should look into. You will find my email on the “About Me” page.

All about Dancer – interview of Sawyer X part 3 and last

2014 May 14
by Nikos Vaggalis

NV: Dancer and web services – where do I start? Conceptually, is REST only about web services?

SX: While REST (REpresentational State Transfer) is not limited to web services, it’s most widely used in that context. It’s a declared structure with which you can define and provide a consistent and understandable web service.

As Dancer tries to be opinionated in moderation, it provides a plugin (Dancer::Plugin::REST and Dancer2::Plugin::REST) to help you define RESTful routes. It lets you easily define resources in your application, so serialization and routes are set up for you.

Sometimes it’s hard for me to get used to new tools, so I haven’t used that plugin yet. I generally define my own paths for everything. While I suggest you take a look at it, I should probably do the same.
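For readers who, like Sawyer, prefer to define their own paths, here is a minimal, hand-rolled sketch of RESTful routes in plain Dancer, without the plugin. The product resource, its fields and the in-memory store are invented purely for illustration:

    use Dancer;

    set serializer => 'JSON';   # return Perl data structures, get JSON back

    my %products;               # hypothetical in-memory store, for illustration only

    get '/products/:id' => sub {
        my $id = param('id');
        if ( my $product = $products{$id} ) {
            return $product;
        }
        status 404;
        return { error => 'no such product' };
    };

    post '/products' => sub {
        my $id = param('id');
        $products{$id} = { id => $id, name => param('name') };
        status 201;
        return $products{$id};
    };

    del '/products/:id' => sub {
        delete $products{ param('id') };
        return { ok => 1 };
    };

    dance;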

NV: What’s on the project’s wish-list, where is it heading, and what can we expect in the future?

SX: We’re focusing on several issues at the moment, which all seem to be congruent with each other: transition users to Dancer 2, overhaul the documentation, improve our test coverage, further decouple internal entities, streamline our coordination, and strip code further.

We’ve made a lot of progress recently, much of it thanks to new core members, and to companies (such as Booking.com) sponsoring hackathons, allowing us to focus more time on these. The attention we receive from our community is invigorating, and pushes us to continue work on the project and invest time in it. It gives us insight into how worthwhile it really is, and it makes our work a pleasure.

NV: Perl vs PHP vs Ruby vs language X, for the web. Why has Perl fallen out of favour with web devs, and what can be done about it?

SX: While I have been working with Perl for a long while, and started back when CGI was pretty much it, others have much more experience, and might be able to answer this question better than I can. This is my rough reasoning, and I may be completely off on this.

I believe the downfall of Perl as the dominating web language was due to our apathy at the time. As soon as we ruled the web with CGI, we were lulled into a false sense of security in that position. In the meantime, other languages were trying to get their bearings and come up with something that could compete. It took some time, but they came up with better things, while we pretty much stalled.

WSGI was done in Python. Then Ruby’s Rack came around. It took some time until we realized those were good and we should have that too, finally provided by Miyagawa as PSGI/Plack. Now our problem is that a lot of people are still not moving onwards to it, and are stuck with arcane methods of web programming, not understanding the value of moving forward.

It’s important to remember that no single language can truly “rule” the web anyway. Perl had its glory days, and they are over. Then PHP had its, and it was over as soon as people realized PHP is not even a real language, and so happened with Ruby and with Rails. Others will have their turn for 15 minutes of fame, and that will be over as well. We will eventually end up with multiple languages (and PHP) and a multitude of web programming abilities, which is a bright future for all of us – except those who will have to work with PHP, of course.

The crucial bit is not to stay behind the curve on new developments, and to push to create new things where appropriate. We shouldn’t just relax with what we have, we should always demand more of ourselves and try and create it, and not wait for other languages to say “hey, this sucks, let’s try fixing it”. That’s what we’re known for, so let’s get back to that.

NV: Your “CGI.pm must die” talk has gone viral. Is CGI.pm really that bad?

SX: CGI.pm wasn’t the module we deserved, but the module we needed. At the time, it was the best thing that happened to Perl and to the web. However, those days have passed. Unfortunately, while Perl evolved, some people stayed in the decade of CGI.pm. We won’t get far if we’re still sticking to the best and latest of 1995. Some of us are quite literally stuck in the previous century; it’s not even funny. Well, it is a bit. It’s also sad.

People often ask me “is CGI.pm that horrible?” and the answer is that, in all honesty, yes, it really is! But that’s not why I go completely apeshit about it. If I may quote a great poet, “it’s about sending a message”. If I had given a talk entitled “use PSGI/Plack”, it wouldn’t have stuck as much as suggesting we kill a module with fire, now would it?

We should all be thankful to Lincoln D. Stein, who gave us CGI.pm, and now retire it in favor of PSGI/Plack. I received an email from Lincoln saying he had watched the talk I gave, enjoyed it (which was a great honor for me), and fully agrees with moving forward. And while we’re moving onwards to bigger and better things, we should check out the new stuff Lincoln has been working on, like his VM::EC2 module.

NV: Would you someday consider switching from Perl 5 to Perl 6? If so, what are your thoughts on Perl 6 and given the opportunity, would you someday re-write Dancer in it?

SX: I would love a chance to work with Perl 6 in a professional capacity, but I don’t see it in the near future. It’s a shame, because, in my opinion, it’s by far the most innovative and interesting language available today.

We’d all been hoping Perl 6 would hit the ground running, and it took some time to realize it isn’t that simple. However, Perl 6 interpreters are now being released regularly, and there’s work being done to bring Perl 5 and Perl 6 closer, both community-wise and technically.

Some amazing ideas came out of Perl 6, some of which were ported to Perl 5 – some of them successfully. When it comes to language features and ability, Perl 6 has done a lot right, even though it also made several mistakes. Hindsight is 20/20, and if we could go back, things would have been done differently. All in all, I think it’s best for us all to concentrate on the current state and the future – and those look bright.

I will likely not have to rewrite Dancer in Perl 6, because tadzik (Tadeusz Sosnierz) has already written a bare-bones Dancer port called Bailador. I haven’t looked at the internals too much, so I’m not sure whether the design flaws we had to deal with exist there too. Either way, I’m certain it’s in good hands, and I hope that in the future I will be able to contribute to it.

I just want to add one last note, if I may. I want to thank our community, who push us closer together, while pushing us to work harder at the same time. It’s a great joy and delight. And I want to also thank the core team for being a wonderful gang to work with, each and every single one. And I’d like to thank you, for giving me the opportunity to talk about Perl and Dancer.

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

All about Dancer – interview of Sawyer X part 2

2014 May 2
by Nikos Vaggalis

NV: Is Dancer 2 a complete rewrite, and why? What shortcomings of the first version does it address?

SX: Dancer 2 is indeed a complete rewrite of the code, and for good reason.

Dancer started as a fork of a fun web framework in Ruby called Sinatra. The founder of Dancer, Alexis Sukrieh, being a polyglot (programming in both Perl and Ruby), had used Sinatra, and wished to have a similar framework in Perl.

As Dancer evolved through its users and community, gaining numerous additional features, it became apparent that some of the original design decisions regarding architecture were a problem. Specifically, engines, which are the core components of Dancer, are all singletons. This means every Dancer subsystem (and consequently, all the code you write in Dancer in the same process) will share the same engine.

An interesting example is the serializer: if one piece of code wants automatic serialization, all your other pieces of code are forced to work with serialization too. You cannot control that.

When we realized we could not force Dancer to do the right thing when it came to engines, we resorted to rewriting from scratch. This allowed several improvements: using a modern object system (Moo), having contextual engines and DSL, and decoupled mini-applications, called Dancer apps.
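A minimal sketch of what those decoupled Dancer apps look like in Dancer 2 – the app names and routes are invented, and the composition relies on Plack::Builder and Dancer2’s to_app, so treat it as illustrative rather than canonical:

    # app.psgi – two independent Dancer2 apps, each with its own DSL and engines
    package BlogApp;
    use Dancer2;
    set serializer => 'JSON';                    # only BlogApp serializes automatically
    get '/posts' => sub { [ { id => 1, title => 'Hello' } ] };

    package AdminApp;
    use Dancer2;                                 # a separate scope; no serializer here
    get '/' => sub { 'admin home' };

    # Compose them into a single PSGI application
    package main;
    use Plack::Builder;

    builder {
        mount '/blog'  => BlogApp->to_app;
        mount '/admin' => AdminApp->to_app;
    };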

NV: There is a lot of talk about Plack/PSGI. What is it, and what is the advantage of hooking into it?

SX: PSGI is a protocol, which means it’s literally a paper describing how a server and an application should speak to each other. It includes the parameters each expects and how they should respond. In essence, it’s an RFC for a new standardized protocol.

Plack is a set of tools for writing PSGI applications and servers. We can use Plack as a reference implementation, or as actual utilities for working with any layer of the PSGI stack, whether it’s the server or the application.

PSGI, inspired by Python’s WSGI and Ruby’s Rack, has many qualities which rendered it an instant success: it’s simple, understandable, works across web servers, includes advanced features such as application mounting, middlewares, long polling requests, and even asynchronous responses.

This made Plack/PSGI a solid foundation for writing web servers and web frameworks. All major web frameworks in Perl support PSGI, and many web servers started appearing, offering a range of features from pre-forking to event-based serving: Starman, Twiggy, Starlet, Feersum, and many more.
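To make the protocol concrete, here is a minimal sketch of a complete PSGI application – a single code reference that receives the environment hash and returns a status, headers and body – which any PSGI server can run:

    # app.psgi
    my $app = sub {
        my $env = shift;                                 # the PSGI environment hash
        return [
            200,                                         # HTTP status
            [ 'Content-Type' => 'text/plain' ],          # headers
            [ "Hello from PSGI, you asked for $env->{PATH_INFO}\n" ],  # body
        ];
    };

    $app;    # a .psgi file simply returns the application code ref

Running plackup app.psgi starts a development server for it; a Dancer application ultimately boils down to a code reference of exactly this shape.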

NV: What functionality do I get out of the box, and what tasks does Dancer take care of for me so I don’t have to? For example, does it include measures to prevent XSS attacks or SQL injection? Or a variety of authentication schemes?

SX: Dancer attempts to be a thin layer over your application code. It provides keywords to help you navigate through the web of… web programming. :)

If you want to define dispatching for your application paths, these are our routes. If you want to check for sessions, we have syntax to store them and retrieve them across different session storages. If you need to set cookies, we have easy syntax for that.
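A small sketch of what that syntax looks like in a Dancer 1 application – the route paths, the ‘Simple’ in-memory session backend and the cookie values are all invented for illustration:

    use Dancer;

    set session => 'Simple';                        # enable in-memory session storage

    get '/login/:user' => sub {
        session user => param('user');              # store a value in the session
        cookie  lang => 'en', expires => '1 hour';  # set a cookie
        return 'Logged in as ' . session('user');
    };

    get '/whoami' => sub {
        return session('user') // 'a stranger';
    };

    dance;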

The idea with Dancer is that it gives you all the small bits and pieces to hook up your application to your web users, and then it vanishes in the background and stays out of your way.

We make an effort to ensure we provide you with code that is flexible, performant, and secure. We take security patches seriously, and our code is publicly available, so security audits are more than welcome.

NV: Plugins and extensibility. How easy is it to extend the DSL, consume CPAN modules, or hook plugins into it? What are some of the most useful plugins I can choose from (engines for templates, sessions, authentication, databases, etc.)?

SX: When you call “use Dancer2” in order to write your web code, the DSL is generated explicitly for your scope. It can differ from the DSL in another scope. The reason for this is so you can use modules that extend that DSL. This is how plugins work.

It’s very important to note that we promote using Plack middlewares (available under the Plack::Middleware class), so your code can work across multiple web frameworks. Still, there are quite a few Dancer plugins to achieve various tasks through Dancer.
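As an illustration of how a plugin extends the DSL, here is a minimal sketch using the Dancer 1 plugin interface (Dancer::Plugin’s register and register_plugin); the Greet plugin and its keyword are invented for this example:

    package Dancer::Plugin::Greet;
    use Dancer ':syntax';
    use Dancer::Plugin;

    # register a new DSL keyword called `greet`
    register greet => sub {
        my ($name) = @_;
        $name //= 'world';
        return "Hello, $name!";
    };

    register_plugin;
    1;

    # ... and in the application:
    use Dancer;
    use Dancer::Plugin::Greet;

    get '/' => sub { greet('Dancer') };

    dance;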

There is a list of recommended modules in Task::Dancer and here are a few I would recommend checking out:

  • Dancer::Plugin::REST – Writing RESTful apps quickly
  • Dancer::Plugin::Database – Easy database connections
  • Dancer::Plugin::Email – Integrate email sending
  • Dancer::Plugin::NYTProf – Easy profiling for Dancer using NYTProf
  • Dancer::Plugin::SiteMap – Automatically generate a sitemap
  • Dancer::Plugin::Auth::Tiny – Authentication done right
  • Dancer::Plugin::Cache::CHI – CHI-based caching for Dancer
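As an example of the kind of convenience these plugins buy you, here is a hedged sketch using Dancer::Plugin::Database; it assumes a database connection has been configured in the app’s config file, and the users table and query are invented:

    use Dancer;
    use Dancer::Plugin::Database;

    get '/users/:id' => sub {
        # `database` returns a configured DBI database handle
        my $sth = database->prepare('SELECT name FROM users WHERE id = ?');
        $sth->execute( param('id') );
        my ($name) = $sth->fetchrow_array;
        return $name // 'unknown user';
    };

    dance;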

NV: What about dependencies on third-party components? Is Dancer lightly or heavily affected?

SX: I love this question, because it allows me to talk about our lovely community.

We try to be community-oriented. Our community is what fuelled Dancer’s growth from a simple web-dispatching piece of code into a contender for full-fledged production websites and products.

The original assumption with Dancer was that dependencies are a problem. While it is possible to reinvent the wheel, it comes at a cost. We tried to balance it out by having as few dependencies as possible, and reinventing where needed.

With time, however, the community expressed a completely different view of the matter. People said, “we don’t give a damn about dependencies. If we can install Dancer, we can install dependencies.”

By the time Dancer 2 came around, we already had so many ways to install dependencies in Perl that it really wasn’t a problem anymore. We have great projects like FatPacker, local::lib, Carton, Pinto, and more. This allowed us to remove a lot of redundant code in Dancer, making it faster, easier to handle and more predictable, and even letting us add features. The community was very favorable to that, and we’re happy we made that decision.
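For instance, a Dancer project can declare its dependencies in a cpanfile and pin them with Carton; the module names below are purely illustrative:

    # cpanfile – declares the application's CPAN dependencies
    requires 'Dancer2';
    requires 'Dancer2::Plugin::Database';
    requires 'Template';

    on 'test' => sub {
        requires 'Test::More';
    };

Running carton install then resolves and records those modules locally, and carton exec plackup bin/app.psgi runs the app against exactly that set.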

So our current approach is “if we need a dependency, we’ll add it”. Last release, actually, we removed a dependency. We just didn’t need it. Our current stack is still relatively small. I think we have a balance, and we’re always open to more feedback from the community about this.

I’ll take every chance to talk about how the community is driving the project. :)

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

All about Dancer – interview of Sawyer X

2014 April 25
by Nikos Vaggalis

After looking into Catalyst, we continue our exploration of Perl’s land of web frameworks with Dancer.

We talk about it with one of the core devs, Sawyer X, who kindly answered our questions in a very detailed and explanatory way, rendering the interview enjoyable and comprehensible even to non-techies.

The interview, which spans three parts (to be published weekly), did not stop there, however; we also asked his opinion on finer-grained aspects of the craft of developing for the web, such as the advantage of routing over query strings, MVC vs routes, Perl vs PHP vs Ruby, and why CGI.pm must die!

NV: The term might be overloaded, but is Dancer an MVC or, should I say, a “route-based” framework? What’s the difference?

SX: Usually MVC frameworks are defined by having a clear distinction between the model (your data source, usually a database), the view (your display, usually a template), and the controller (your application code).

While Dancer helps you to easily separate them (such as providing a separate directory for your templates called “views”, by default), it doesn’t go as far as some other frameworks in how much it confines you to those conventions.

I like to describe Dancer as “MVC-ish”, meaning it has a very clear notion of MVC, but it isn’t restrictive. If you’re used to MVC, you will feel at home. If you’re not, or don’t wish to have full-fledged MVC separation, you aren’t forced into it either.

Most web frameworks use – what I would call – namespace-matching, in which the path you take has an implicit translation to a directory structure, a function, and the optional arguments. For example, the path /admin/shop/product/42 implicitly states it would (most likely) go to a file called Admin/Shop.pm, to a function called product, and will provide the parameter 42 to the function. The implicit translation here is from the URL /admin/shop/product/42 to the namespace Admin::Shop::product, and 42 as the function parameter.

Route-based frameworks declare explicitly what code to run on which path. You basically declare /admin/shop/product/$some_id will run a piece of code. Literally that is it. There is no magic to describe since there is no translation that needs to be understood.

NV: What is the advantage of routing, and why is the traditional model of query strings not enough?

SX: The route-based matching provides several essential differences:

  • It is very simple for a developer to understand
  • It takes the user perspective: if the user goes to this path, run this code
  • It stays smaller since it doesn’t require any specific structure for an implicit translation to work, unlike namespace-matching

The declarative nature of route-based frameworks is quite succinct: you dictate which code runs when. As explained, you are not confined to any structure. You can just settle for a single line:

get '/admin/shop/product/:id' => sub {…};

This provides a lot of information in one line of code. No additional files, no hierarchy. It indicates we’re expecting GET requests which will go to /admin/shop/product/:id. The :id is an indication that this should be a variable (such as a number or name), and we want it to be assigned the name id so we can then use it directly. When that path is reached, we will run that subroutine. It really is that simple. We can reach that variable using a Dancer keyword, specifically param->{'id'}.
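Put together as a complete, runnable one-file Dancer application (the response text is of course made up), that route looks something like this:

    use Dancer;

    get '/admin/shop/product/:id' => sub {
        my $id = param('id');                 # the :id part of the path, e.g. 42
        return "You asked for product $id";
    };

    dance;    # start the built-in development server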

NV: Dancer is a complete, feature-rich DSL. Does this mean that I write code in a dedicated language and not Perl?

SX: All the web-layer work in Dancer can be done in the DSL, but the rest of your application is written in Perl. Dancer just provides a comfortable, clean, beautiful language to define your web layer and to add parts to it (like cookies, different replies, template rendering, etc.). In Dancer 2 the DSL is built atop a clean object-oriented interface and provides nice keywords on top of it.

That is really what a web framework should do: provide a nice clean layer on top of your application.

I guess a better way would be to say you write your code in Perl with dedicated functions that look nicer. :)

NV: There are many web frameworks out there, each targeting various “problem” areas of web development. Which ones does Dancer address?

SX: Dancer provides a sane, thin layer for writing stable websites. It introduces relatively few dependencies, but doesn’t reinvent everything. It uses sane defaults and introduces basic recommended conventions, but isn’t too opinionated and remains flexible in the face of a multitude of use cases.

NV:What about the other Perl frameworks, Catalyst and Mojolicious? How do they compare to Dancer?

SX: Catalyst, as great a framework as it is, is pretty big. It uses an enormous number of modules and is clearly very opinionated. This is not necessarily a bad thing, but it might not be what you’re interested in.

Mojolicious pushes the boundaries of HTML5 programming, providing all the eye-candy features Hacker News buzzes about, and is very successful at that.

Dancer fills the niche between those. It provides a stable base for your website. It does not depend on as many modules, but it does not reinvent every single wheel in existence. It’s the “oh my god, this is what my production websites look like now!” call of gleeful cheer. :)

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

C# Guru – An Interview With Eric Lippert

2014 April 22
by Nikos Vaggalis

Eric Lippert’s name is synonymous with C#. Having been Principal Developer at Microsoft on the C# compiler team and a member of the C# language design team, he now works on C# analysis at Coverity.

If you know C#, you will also know Eric Lippert for his clear explanations of difficult ideas and his insights into the way languages work and are put together.

Here we host a summary of the highlights of the interview, ranging over topics as diverse as the future of C#, asynchrony versus parallelism, Visual Basic and more (the link to the full interview on i-programmer can be found at the end of this page). Read on – you will surely find something to interest you about C#, languages in general, or just where things are heading.

NV : So Eric, after so many years at Microsoft you began a new career at Coverity. Was the ‘context switch’ easy?

EL : Yes and no. Some parts of it were very easy and some took some getting used to.

For example, re-learning how to use Unix-based development tools, which I had not touched since the early 1990s, took me a while. Git is very different from Team Foundation Server. And so on. But some things were quite straightforward.

Coverity’s attitude towards static analysis is very similar to the attitude that the C# compiler team has about compiler warnings, for instance. Though of course the conditions that Coverity is checking for are by their nature much more complicated than the heuristics that the C# compiler uses for warnings.

Switching to taking a bus downtown every day instead of a bus to Redmond was probably the easiest part!

NV: I guess that from now on you’ll be working in the field of static analysis. What exactly does static analysis do?

EL: Static analysis is a very broad field in both industry and academia. So let me first start very wide, and then narrow that down to what we do at Coverity.

Static analysis is analysis of programs based solely on their source code or, if the source code is not available, their compiled binary form. That is in contrast with dynamic analysis, which analyses program behavior by watching the program run. So a profiler would be an example of dynamic analysis; it looks at the program as it is running and discovers facts about its performance, say.

Any analysis you perform just by looking at the source code is static analysis. So for example, compiler errors are static analysis; the error was determined by looking at the source code.

So now let’s get a bit more focused. There are lots of reasons to perform static analysis, but the one we are focused on is the discovery of program defects. That is still very broad. Consider a defect such as “this public method violates the Microsoft naming guidelines”. That’s certainly a defect. You might not consider that a particularly interesting or important defect, but it’s a defect.

Coverity is interested in discovering a very particular class of defect: defects that would result in a bug that could realistically affect a user of the software. We’re looking for genuine “I’m-glad-we-found-that-before-we-shipped-and-a-customer-lost-all-their-work” sorts of bugs, not something like a badly named method that the customer is never going to notice.

NV: Do Code contracts play a role, and will the introduction of Roslyn affect the field of static analysis?

EL: Let me split that up into two questions. First, code contracts.
So as you surely know, code contracts are annotations that you can optionally put into your C# source code that allow you to express the pre-condition and post-condition invariants about your code. So then the question is, how do these contracts affect the static analysis that Coverity does? We have some support for understanding code contracts, but we could do better and one of my goals is to do some research on this for future versions.

One of the hard things about static analysis is the number of possible program states and the number of possible code paths through the program is extremely large, which can make analysis very time consuming. So one of the things we do is try to eliminate false paths — that is, code paths that we believe are logically impossible, and therefore do not have to be checked for defects. We can use code contracts to help us prune false paths.

A simple example would be if a contract says that a precondition of the method is that the first argument is a non-null string, and that argument is passed to another method, and the second method checks the argument to see if it is null. We can know that on that path – that is, via the first method – the path where the null check says “yes it is null” is a false path. We can then prune that false path and not consider it further. This has two main effects. The first is, as I said before, we get a significant performance gain by pruning away as many false paths as possible. Second, a false positive is when the tool reports a defect but does so incorrectly. Eliminating false paths greatly decreases the number of false positives. So we do some fairly basic consumption of information from code contracts, but we could likely do even more.

Now to address your second question, about Roslyn. Let me first answer the question very broadly. Throughout the industry, will Roslyn affect static analysis of C#? Absolutely yes, that is its reason for existing.

When I was at Microsoft I saw so many people write their own little C# parsers or IDEs or little mini compilers or whatever, for their own purposes. That’s very difficult, it’s time-consuming, it’s expensive, and it’s almost impossible to do right. Roslyn changes all that by giving everyone a library of analysis tools for C# and VB which is correct, very fast, and designed specifically to make tool builders’ lives better. I am very excited that it is almost done! I worked on it for many years and can’t wait to get my hands on the release version.

More specifically, will Roslyn affect static analysis at Coverity? We very much hope so. We work closely with my former colleagues on the Roslyn team. The exact implementation details of the Coverity C# static analyzer are of course not super-interesting to customers, so long as it works. And the exact date Roslyn will be available is not announced.

So any speculation as to when there will be a Coverity static analyzer that uses Roslyn as its front end is just that — speculative. Suffice to say that we’re actively looking into the possibility.

NV: What other possibilities does Roslyn give rise to? Extending the language, macros/mutable grammars, a JavaScript-like eval, a REPL?

EL: Some of those more than others. Let me start by taking a step back and reiterating what Roslyn is, and is not. Roslyn is a class library usable from C#, VB or other managed languages. Its purpose is to enable analysis of C# and VB code. The plan is for future versions of the C# and VB compilers and IDEs in Visual Studio to themselves use Roslyn.

So typical tasks you could perform with Roslyn would be things like:

  • “Find all usages of a particular method in this source code”
  • “Take this source code and give me the lexical and grammatical analysis”
  • “Tell me all the places this variable is written to inside this block”

Let me quickly say what it is not. It is not a mechanism for customers to themselves extend the C# or VB languages; it is a mechanism for analyzing the existing languages. Roslyn will make it easier for Microsoft to extend the C# and VB languages, because its architecture has been designed with that in mind. But it was not designed as an extensibility service for the language itself.

You mentioned a REPL. That is a Read-Eval-Print Loop, which is the classic way you interface with languages like Scheme. Since the Roslyn team was going to be re-architecting the compiler anyway they put in some features that would make it easier to develop REPL-like functionality in Visual Studio. Having left the team, I don’t know what the status is of that particular feature, so I probably ought not to comment on it further.

One of the principal scenarios that Roslyn was designed for is to make it much easier for third parties to develop refactorings. You’ve probably seen in Visual Studio that there is a refactoring menu and you can do things like “extract this code to a method” and so on. Any of those refactorings, and a lot more, could be built using Roslyn.

As for whether there will be an eval-like facility for spitting out fresh code at runtime, like there is in JavaScript, the answer is: sort of. I worked on JavaScript in the late 1990s, including the JScript.NET language that never really went anywhere, so I have no small experience in building implementations of JS “eval”. It is very hard. JavaScript is a very dynamic language; you can do things like introduce new local variables in “eval” code.

There is to my knowledge no plan for that sort of very dynamic feature in C#. However, there are things you can do to solve the simpler problem of generating fresh code at runtime. The CLR of course already has Reflection Emit. At a higher level, C# 3.0 added expression trees. Expression trees allow you to build a tree representing a C# or VB expression at runtime, and then compile that expression into a little method. The IL is generated for you automatically.

If you are analysing source code with Roslyn then there is I believe a facility for asking Roslyn “suppose I inserted this source code at this point in this program — how would you analyze the new code?”

And if at runtime you started up Roslyn and said “here’s a bunch of source code, can you give me a compiled assembly?” then of course Roslyn could do that. If someone wanted to build a little expression evaluator that used Roslyn as a lightweight code generator, I think that would be possible, but I’ve never tried it.

It seems like a good experiment. Maybe I’ll try to do that.

NV: Although the TPL and async/await were great additions to both C# and the framework, they also caused a lot of commotion, generating more questions than answers:

What’s the difference between Asynchrony and Parallelism?

EL: Great question. Parallelism is one technique for achieving asynchrony, but asynchrony does not necessarily imply parallelism.

An asynchronous situation is one where there is some latency between a request being made and the result being delivered, such that you can continue to process work while you are waiting. Parallelism is a technique for achieving asynchrony, by hiring workers – threads – that each do tasks synchronously but in parallel.

An analogy might help. Suppose you’re in a restaurant kitchen. Two orders come in, one for toast and one for eggs.

A synchronous workflow would be: put the bread in the toaster, wait for the toaster to pop, deliver the toast, put the eggs on the grill, wait for the eggs to cook, deliver the eggs. The worker – you – does nothing while waiting except sit there and wait.

An asynchronous but non-parallel workflow would be: put the bread in the toaster. While the toast is toasting, put the eggs on the grill. Alternate between checking the eggs, checking the toast, and checking to see if there are any new orders coming in that could also be started.

Whichever one is done first, deliver first, then wait for the other to finish, again, constantly checking to see if there are new orders.

An asynchronous parallel workflow would be: you just sit there waiting for orders. Every time an order comes in, go to the freezer where you keep your cooks, thaw one out, and assign the order to them. So you get one cook for the eggs, one cook for the toast, and while they are cooking, you keep on looking for more orders. When each cook finishes their job, you deliver the order and put the cook back in the freezer.

You’ll notice that the second mechanism is the one actually chosen by real restaurants because it combines low labour costs – cooks are expensive – with responsiveness and high throughput. The first technique has poor throughput and responsiveness, and the third technique requires paying a lot of cooks to sit around in the freezer when you really could get by with just one.

NV: If async does not start a new thread in the background how can it perform I/O bound operations and not block the UI thread?

EL: Magic!

No, not really.

Remember, fundamentally I/O operations are handled in hardware: there is some disk controller or network controller that is spinning an iron disk or varying the voltage on a wire, and that thing is running independently of the CPU.

The operating system provides an abstraction over the hardware, such as an I/O completion port. The exact details of how many threads are listening to the I/O completion port and what they do when they get a message, well, all that is complicated.

Suffice to say, you do not have to have one thread for each asynchronous I/O operation any more than you would have to hire one admin assistant for every phone call you wanted answered.

NV: What feature offered by another language do you envy the most and would like to see in C#?

EL: Ah, good question.
That’s a tricky one, because there are languages with features that I love which, actually, I don’t think would work well in C#.

Take F# pattern matching, for example. It’s an awesome feature. In many ways it is superior to more traditional approaches for taking different actions on the basis of the form that some data takes. But is there a good way to hammer on it so that it looks good in C#? I’m not sure that there is. It seems like it would look out of place.

So let me try to think of features that I admire in other languages but I think would work well in C#. I might not be able to narrow it down to just one.

Scala has a lot of nice features that I’d be happy to see in C#. Contravariant generic constraints, for example. In C# and Scala you can say “T, where T is Animal or more specific”. But in Scala you can also say “T, where T is Giraffe or less specific”. It doesn’t come in handy that often but there are times when I’ve wanted it and it hasn’t been there in C#.

There’s a variation of C# called C-Omega that Microsoft Research worked on. A number of features were added to it that did not ever get moved into C# proper. One of my favorites was a yield foreach construct that would automatically generate good code to eliminate the performance problem with nested iterators. F# has that feature, now that I think of it. It’s called yield! in F#, which I think is a very exciting way to write the feature!

I could go on for some time but let’s stop listing features there.

NV: What will the feature set of C# 6.0 be?

EL: I am under NDA and cannot discuss it in detail, so I will only discuss what Mads Torgersen has already disclosed in public. Mads did a “Future of C#” session in December of last year. He discussed eight or nine features that the C# language design team is strongly considering for C# 6.0. If you read that list carefully – Wesner Moise has a list at http://wesnerm.blogs.com/net_undocumented/2013/12/mads-on-c-60.html – you’ll see that there is no “major headliner” feature.

I’ll leave you to draw your own conclusions from that list.

Incidentally, I knew Wesner slightly in the 1990s. Among his many claims to fame is he invented the pivot table. Interesting guy.

NV: Java, as tortured as it might be, keeps revitalizing itself thanks to Linux and the popularity of mobile devices. Does the future of .NET and C# depend on the successful adoption of Windows on mobile devices?

EL: That’s a complicated question, as are all predictions of the future.

But by narrowly parsing your question and rephrasing it into an — I hope — equivalent form, I think it can be answered. For the future of technology X to depend on the success of technology Y means “we cannot conceive of a situation in which Y fails but X succeeds”.

So, can we conceive of a situation in which the market does not strongly adopt Windows on mobile devices, but C# is adopted on mobile devices? Yes, absolutely we can conceive of such a situation.

Xamarin’s whole business model is predicated on that conception. They’ve got C# code running on Android, so C# could continue to have a future on the mobile platform even if Windows does not get a lot of adoption.

Or suppose that both Microsoft fails to make headway with Windows on mobile and Xamarin fails to make headway with C# on Android. Can we conceive of a world in which C# still has a future? Sure.

Mobile is an important part of the ecosystem, but it is far from the whole thing. There are lots of ways that C# could continue to thrive even if it is not heavily adopted as a language for mobile devices.

If the question is the more general question of “is C# going to thrive?” I strongly believe that it is. It is extremely well placed: a modern programming language with top-notch tools and a great design and implementation team.

NV: Do you think that C# and the managed world as a whole, could be “threatened” by C++ 11 ?

EL: Is C# “threatened” by C++11?

Short answer: no

There’s a saying amongst programming language designers – I don’t know who said it first – that every language is a response to the perceived shortcomings of another language.

C# was absolutely a response to the shortcomings of C and C++. (C# is often assumed to be a response to Java, and in a strategic sense, it was a response to Sun. But in a language design sense it is more accurate to say that both C# and Java were responses to the shortcomings of C++.)

Designing a new language to improve upon an old one not only makes the world better for the users of the new language, it gives great ideas to the proponents of the old one. Would C++11 have had lambdas without C# showing that lambdas could work in a C-like language? Maybe. Maybe not. It’s hard to reason about counterfactuals.
But I think it is reasonable to guess that it was at least a factor.

Similarly, if there are great ideas in C++11 then those will inform the next generation of programming language designers. I think that C++ has a long future ahead of it still, and I am excited that the language is still evolving in interesting ways.

Having choice of language tools makes the whole developer ecosystem better. So I don’t see that as a threat at all. I see that as developers like me having more choice and more power tools at their disposal.

NV: What is your reply to the voices saying that C# has grown out of proportion and that we’ve reached the point where nobody except its designers can have a complete understanding of the language?

EL: I often humorously point out that the C# specification begins with “C# is a simple language” and then goes on for 800 dense pages. It is very rare for users to write even large programs that use all the features of C# now. The language has undoubtedly grown far more complex in the last decade, and the language designers take this criticism very seriously.

The designers work very hard to ensure that new features are “in the spirit” of the language, that design principles are applied consistently, that features are as orthogonal as possible, and that the language grows slowly enough that users can keep up, but quickly enough to meet the needs of modern programmers. This is a tough balance to strike, and I think the designers have done an exceptionally good job of it.

NV: Where is programming as an industry heading, and will increasingly smart compilers that make programming accessible to anyone – something very positive, of course – pose a threat to the professional programmer’s job stability?

EL: I can’t find the figure right now, but there is a serious shortage of good programmers in the United States at the moment. A huge number of jobs that require some programming ability are going unfilled. That is a direct brake on the economy. We need either to produce more skilled programmers, or to make writing robust, correct, fully featured, usable programs considerably easier. Or, preferably, both.

So no, I do not see improvements in language tools that make it easier for more and more people to become programmers as any kind of bad thing for the industry. Computers are only becoming more ubiquitous. What we think of as big data today is going to look tiny in the future, and we don’t have the tools to effectively manage what we’ve got already.

There is going to be the need for programmers at every level of skill working on all kinds of problems, some of which haven’t even been invented yet. This is why I love working on developer tools; I’ve worked on developer tools my whole professional life. I want to make this hard task easier. That’s the best way I can think of to improve the industry.

Link to the full interview on i-programmer

Nikos Vaggalis has a BSc in Computer Science and an MSc in Interactive Multimedia. He works as a Database Developer with Linux and Ingres, and programs in both Perl and C#. He writes articles, conducts interviews and reviews technical IT books.

Helen Schell at Maker Faire, UK

2014 April 16

Helen Schell will be exhibiting once again at the Newcastle Maker Faire on 26th-27th April at the Centre for Life.

 

Beam Dress

She will be showing her latest creation, the Beam Dress, which was created for Light up the Streets in Lancaster, UK, last winter. It is one of a series of Smart Materials dresses exploring light-reactive materials. In December 2013, it was also displayed at the Mattereality Conference at the Scottish Museum of Modern Art, Edinburgh.

Alongside the Beam Dress, her recent short film, UN-Dress, will also be screened. It is a continuation of her previous creations that you may have seen at other shows. UN-Dress was a ball-gown made from dissolvable thermoplastic and was created for the Undress: Redress project in 2012. It was filmed dissolving during a performance at the Whitley Film Festival last year.


This dress was commissioned by Science Learning Centre North East as part of The Fashioning Science project.

This project also included The Dazzle Dress, which was made from Hi-Vis safety jackets to create a futuristic and unusual ball-gown. It has been exhibited at the London Science Museum, Newcastle Maker Faire and Durham Cathedral. The project was shortlisted for the North East Culture Awards in 2013.

Helen Schell is a visual artist and ESERO-UK Space Ambassador who specialises in artworks about space exploration and the science of the cosmos, and is based at the NewBridge Project in the north east of England.

She organises exhibitions, residencies and children’s education projects by using art and craft techniques to communicate science. Through diverse projects, she invents inclusive activities to get participants interested in space and future technology. She makes large mixed media art installations and paintings, often described as ‘laboratories’.

In 2010, she was artist in residence at Durham University’s Ogden Centre for Fundamental Physics, where she collaborated with scientists and created a Space-Time Laboratory.

In 2012, as ‘Maker in Residence’ at the Newcastle Centre for Life, she created ‘Make it to the Moon’, an interactive education experience, mainly for children, where they imagine setting up a colony on the Moon. These workshops also went to the London Science Museum, the British Science Festival and Arts Catalyst. Other projects include being a judge for NASA’s children’s art competition ‘Humans in Space’.

Always aiming to reach diverse groups, in 2013 her space art projects and workshops included Durham Cathedral, Hexham Abbey, and Gateshead Library for the Festival of the North East. In 2014, she created a project with Royal Holloway University – Invisible to Visible – exploring Dark Matter, Extra Spatial Dimensions and CERN through a series of experimental art books and a large family workshop.

The Moon Rocket, Hi-Vis Ball-gowns and the UN-Dress film will be shown at Loncon 3, ExCeL Centre, London Docklands, 14-18 August 2014.

For further information, please visit the Newbridge Project.

My own cigar box guitar

2014 April 7

A few years ago I met Andrea Maietta and Paolo Aliverti, founders of Frankenstein Garage in Milan. Frankenstein Garage is a fab lab that came to life in May 2011 in front of a coffee machine – one of the most dangerous places in the world, because it hosts conversations that can lead you to the strangest and most unimaginable places and situations. Which is exactly what happened to Andrea and Paolo. Of course we discussed Make magazine from Maker Media, and in particular an issue in which Andrea found the famous cigar box guitar he built. I was in awe: this thing works! Andrea was kind enough to make me a copy, which is now proudly sitting in my lounge waiting for me to write this post and learn to play.

What do you need to build your cigar box guitar?

  • One hardwood, knotless, long and narrow piece of wood, 102 cm x 4 cm x 2 cm for the neck (oak or maple works well)
  • 2 small pieces of wood to be used as rests for the strings – is that called a bridge?
  • Strong cardboard cigar box, 4 cm x 2 cm x 23 cm. Yes, the box has to be empty, and we don’t care about the brand.
  • Nuts and bolts of different sizes to be used as tuning pegs
  • Nails
  • Elastic bands
  • Different size plastic strings
  • Glue

Work in progress.
You are so lucky – you cannot hear me!

Tools needed:

  • A drill
  • Sanding paper

And now for building the guitar – it all looks very easy to me(!) –

  • drill 6 holes in the neck for the strings (3 at each end)
  • drill 3 larger holes for the bolts used as tuning pegs
  • stick one small bridge approximately 10 cm from the end of the neck
  • place the other bridge at the other end
  • stick the box on the back of the neck
  • lastly, add the strings and make sure they are tight enough to make a lovely sound when played.

Confused? Not a problem – go to Makezine.com, where you will find several pages showing how to make this guitar.

 

I should add that Andrea and Paolo have now written a book – Il manuale del maker. La guida pratica e completa per diventare protagonisti della nuova rivoluzione industriale (The Maker’s Handbook – the practical and complete guide to becoming a protagonist of the new industrial revolution). Unfortunately it is in Italian, so hopefully somebody somewhere will translate it. Watch this space!

Craft Conference, Budapest, 23rd to 25th April

2014 April 3

Just got a teaser for this new conference –

 


Not only will I be there with a bunch of books that will sell at a 40% discount, but the organizers have also offered a $100 discount to friends of O’Reilly. To get this fantastic discount, register here.

Some of our authors will be there including:

Dan North, who will give a full-day tutorial called Accelerated Agile: from Months to Minutes, as well as the talk Jackstones: the journey to mastery

Chad Fowler – McDonalds, Six Sigma, and Offshore Outsourcing: Unexpected Sources of Insight

Douglas Crockford – Managing Asynchronicity with RQ

Ian Robinson – Graph Search: The Power of Connected Data

Michael Nygard – Cooperating when the fur flies

Mitchell Hashimoto – Vagrant, Packer, Serf: Maximum Potency DevOps

You can find a complete list of speakers here.

Budapest here I come!

Hope to see you there.

What is Quink?

2014 March 25

Quink is an extensible, end-user-friendly, in-page WYSIWYG HTML editor, designed for mobile first.

It allows developers to add rich input and self-edited areas into web pages and web apps.

You can find it at www.quink.mobi and on GitHub.

Why did it get built?

The trigger was that I could not find a good solution for editing rich content on mobile, specifically on the iPad platform – at least not one that hit all the key points I was aiming for. We (IMD Business School) have had an iPad app in production for supporting our course participants since June 2010, and I wanted to move beyond plain text annotations on PDFs, notes and plain text discussion forums with file attachments. I wanted participants in our courses to be able to create richer content for themselves, for sharing with fellow participants, and for feedback to and from the professors.

Once started down that road, there are a host of decisions to make, top of the list being:

  • data / file format
  • editing capabilities
  • separate app or in-app component
  • openness to variety of use cases, or focus on a tight scope

There is more detail on these points below, but after initial consideration, I was looking for

  • an HTML editor that works within the browser
  • with a good UI/UX for basic rich text editing,
  • that could be embedded in our app, and
  • which had a good API and plug-in architecture to allow great flexibility.

So I started looking for solutions. While there are many things that fulfill some of the requirements, I simply could not find one that would work well on the iPad, the primary target. In the end it seemed to come down to a choice between doing significant work on somebody else’s architecture, all of which were designed for the desktop browser, or starting from scratch and focusing primarily on the mobile environment and our own needs. Even that decision is not a no-brainer. But the end result was that we followed the path that led to Quink.

Architecture

I won’t go into all the alternatives considered here, but for me the decisions were reasonably clear:

Data format: HTML

For me, HTML is the only rational choice for the base format. It is the most versatile and widely used document format, has a really excellent track record of backward compatibility, and is free of proprietary control. If that seems a little strange, I guess it is just because people don’t think of it as a document format in the same category as PDF, Word docs, and so on. The only thing that is really tricky is precise, locked-down control of presentation and layout. But in the multi-device world, I see that less as a critical feature than as a lurking problem. I would argue that the natural tendency of HTML to flow is more valuable, though more difficult to work with.

Separate app or in-app component.

The easy option would be to break out into specialist apps/editors. That makes extensibility simple, but it provides a really horrible user experience for many use cases and is simply unusable for some core requirements. Using third-party apps also creates all kinds of problems around cross-platform requirements, compatibility, and data management. For something as core as document creation and viewing, we needed something we could build into our app and our web portals and be sure it would just work.

Editing Capabilities

The minimum requirement was easy creation of ‘rich text’: some basic formatting for headings, lists, emphasis. The ability to include images. That’s really the core requirement, and covers 80% of immediate uses. But there is always the remaining 20%…

Tight focus or broad applicability

Beyond the basic capabilities, I knew from the start that there are a million things that will be required at some point: tables, graphs, vector graphics, video, audio, and just about anything you can imagine. When each of these will become important or critical is unknown – so the key requirement is to have something that is extensible in response to new demands and use cases. HTML and the web stack provide a good framework for allowing this, and for using components developed by others – both proprietary or open source. So our goal was to create an architecture where we could employ specialist content editors developed by others, out of the box. I always strive to create architectures that put as few limits as possible on the future without incurring unreasonable current costs – and I think we have achieved that.

Design – What is special?

Content divs and tagging

Probably the most significant thing about Quink is the approach to extensibility and plugins. The idea is that a page is made up of units, for each of which you may need a specialist editor. Quink exploits the HTML structure where a page is made up of a set of divs and elements. Divs provide clean boundaries for content. Some divs may be tagged to identify the editor functionality that is appropriate for editing them. If the user wishes to edit such tagged content, then the specialist editor is loaded, passed the content that it needs to edit, and the Quink core steps back to let it do its job. When it is finished, the modified content is updated in the HTML.

The base implementation of this was designed to allow the use of editors that have no knowledge of Quink. There are very, very few requirements that an editor must satisfy to be usable as a plugin, and the system uses adapters so that the requirements on the underlying component are functional capabilities, not any specific API. To be eligible as a plugin, an editor need only:

  • Be loadable in a web page
  • To have some method of delivering the edited content to the Quink core so that it can be dropped into a page – ie renderable HTML.
  • In order to be re-editable, it also needs a means of having the HTML sent to it.

Being loadable includes being hosted on different servers. Quink defaults to opening plugins in iframes, and only loads them when the user asks for them. They don’t have to be part of the root site, so you don’t actually need access to the source, or to own the editor component. Of course, when setting up a plugin you should trust the provider and the code enough to give them access to an iframe in your page!

The mechanisms for transferring the data can include all kinds of back-end tricks if needed, though we haven’t gone down that road ourselves yet. Quink supplies the user with a button to save and exit, or simply quit the plugin – which calls a function on the plugin’s adapter, so the plugin editor itself does not need to know it is operating inside Quink or to emit any events.

If an editor is not capable of re-editing existing content it will not break anything either, though of course it may not meet user expectations.

Adding Quink to a page & configuration

Quink bootstraps from a small script which exists mainly to set up the URLs to load from and kick Require.js into action. The default bootstrap script will do the job for most installations; the target page then only needs a one-line inclusion of the bootstrap script, and to declare one or more divs as contenteditable; Quink is enabled as the editor for all of them by default.

Various aspects of Quink are easily configurable by adjusting JSON files: the plugins, the toolbars, and the keymap for keyboard-driven edit functions. One of the items on the roadmap is to allow these configuration structures to be cleanly manipulated after loading.

There are also a number of things which we have found it useful to let Quink pull from the page query parameters: the autosave frequency and destination, the destination for an explicit POST of the content as an alternative save mechanism, and whether the toolbar should pop up on page load. This approach allows referring links to change aspects of the configuration that turn out to change more frequently than it is practical to change code, and seems to be a useful pattern. In future I think this will be extended, made more generic – and capable of being disabled.
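For illustration, a referring link driven this way might look roughly like the following; the parameter names are invented to show the pattern and are not Quink’s actual query-string options:

    https://example.com/notes.html?autosaveinterval=30&savedest=/api/save&toolbar=off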

Keyboard mapping

Another thing that is a little unusual in WYSIWYG HTML editors, and that we have included in Quink, is keyboard commands. This was also driven by frustration: as fairly heavy iOS users we found touch-based cursor positioning and selection painful. In my view this is one area where the Android UI is just miles better; but even then, trying to position the cursor and select text to replace, delete or format is slow and relatively tricky, because fine positioning is inherently more difficult with a touch interface than with a mouse – and I am speaking as someone who has quite steady hands. From my past in mobile surveying and mapping, I know that quite a high percentage of the population have really quite shaky hands and find fine positioning on touch screens REALLY hard.

So we added an approach to keyboard commands: the minimal target was simply keyboard-based navigation and selection, but the architecture delivered the ability to map a key sequence to any of the toolbar or internal commands. Because of the limitations of on-screen keyboards, we had to deliver this without control keys, and the best solution seemed to be to use a standard QWERTY key in some way. Following that line led pretty inevitably to requiring a command mode and an insert mode, like vi. This is deeply ironic, since I grew up as an Emacs fan and avoided vi as much as I could, and now found myself forced to implement and learn vi-like sequences to achieve what I wanted.

Where we have ended up is with two ‘q’ keys in quick succession to enter command mode, and a single ‘q’ to return to normal, or ‘insert’, mode. The default map is not the same as vi, because of course many commands are more about formatting than rapid editing of plain text, but diehards can adjust it to suit their tastes! To handle the limited set of keys, it is possible to set up command groups with ‘prefix’ keys: for example, we use ‘f’ for font formatting, so ‘fb’ means ‘format:toggle-bold’ and ‘fbi’ means ‘format:(toggle-bold, toggle-italic)’.
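Expressed in the kind of JSON keymap file mentioned earlier, a fragment of such a map might look roughly like this – the structure and property names are a guess at the general shape rather than the real file format, though the command names follow the format:toggle-bold pattern just described:

    {
        "enter-command-mode": "qq",
        "exit-command-mode": "q",
        "command-groups": {
            "f": {
                "b": "format:toggle-bold",
                "i": "format:toggle-italic"
            }
        }
    }

With a map along these lines, ‘fb’ toggles bold and ‘fbi’ chains bold and italic, as in the example above.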

What next?

I have a long list of to-dos.

Some should be relatively simple, such as adding a few more plugins; I particularly fancy the image editor Muro, not only because it seems really good, but because it is a hosted component with the required functions, so it is also an interesting test case for the plugin architecture. After that, the next class of plugin to work on is grid/table support.

Good support of Android devices is certainly high on the list.

After that, there is some significant work to be done on div and CSS style management. Right now, Quink just exposes the browser behaviour in these areas, which is limited and often rather flaky. In principle this is all doable, but doing it well with a clean architecture is an interesting challenge.

We have some other cool ideas, but they are in a phase of stealth mode experimentation just now.

Open Source

We have released Quink under the GNU Lesser GPL. The aim is to find a good balance between maximising the usefulness and user base around Quink (by being relatively liberal) and getting help and input from the community on improvements. Our current understanding of the Lesser GPL with regard to Quink is that it allows people to use Quink in their apps and sites, or to write add-ons and plugins, without being obliged to open-source everything – thereby maximising its usefulness. However, if people find and make bug fixes or compatibility fixes to the core, publishing them is the least they can do. It would be great if people become proactive and contribute or publish plugins, plugin adapters or other significant enhancements, but that is entirely voluntary.

Iain is CTO at IMD Business School in Lausanne. He led IMD’s pioneering use of iPads in education as part of a longer-term re-engineering of IMD’s operational and learning support systems.

Prior to joining IMD in 2008, Iain worked in a variety of industries, but always at the forefront of technology development and disruptive change. In the early 90s, Iain co-founded a software start-up focused on mobile pen computing and geospatial solutions. The solution he created enabled Ordnance Survey to become the world’s first mapping agency to use a 100% digital map collection and production system, and helped revolutionise the industry of creating and consuming geographic information.

Lack of women speakers at conferences

2014 March 11

A couple of weeks ago I asked User Group Leaders why there are so few women speakers at conferences. I received some comments that I would like to share with you.

First I got the statements – reasons why women do not attend or speak at conferences:

  • “I’m an ecology/evolution person so our conferences are more gender balanced than elsewhere. In fact we tend towards more female speakers than male speakers in general sessions because there are more female PhD students (60:40). However, there is a big issue with symposium speakers [, …] where speakers have to be selected and invited to speak.

I’ve been organising a symposium this year in the area of computational evolutionary biology and I also run our seminar series at Trinity. I have found it very hard to get female speakers compared to male speakers. Firstly it seems that some fields genuinely do have fewer women. Secondly[,] even in fields where there are lots of women, unconscious bias leads people to suggest male speakers first (I do this too but now force myself to think of equal numbers). All this is before even inviting people! When I do ask women I find that they are much more likely to say no due to a) being busy and not wanting to travel so much b) childcare issues and c) not really seeing the benefit to their career. I also think there is a bit of impostor syndrome going on here where women think they have nothing to say, or are being invited as the “token” woman.

I’ve also found a really weird thing that happens with PhD students of theoretical or computational professors – all the male students do method development, programming etc and all the female students do empirical studies. I myself am more of a “high end methods user” than a competent programmer though I’m trying to improve! …” – Natalie, Ireland

  • “I think the cause of there being so few women presenting at conferences is systematic bias – no single person is consciously discriminating, the system as a whole is. So, for example, if people are looking for “the best” JavaScript developer to present at a conference, the systemic bias makes that person much more likely to be a man.” – Jonathan, UK
  • “From my experience as a student and as a PhD candidate, whenever I was participating together with my colleagues at different conferences or competitions I was the one doing the presentations. […] [A]s you correctly noticed there are very few women in IT Conferences. Two weeks ago I attended a conference in Barcelona and there was only one woman speaker.”  – Andreea, Romania
  • “Unfortunately we do not have girls in our user group. It seems girls and women simply are not interested in listening to our topics. I think it may be counted on the fingers of one hand the number of female presence in our pluriennal activity. If you get an answer to this question I’m the first that would like to know.” – Andrea, Italy
  • “I would love to know that too. I’m not the leader of AIRO anymore (where we had 3 women as active members, but they were not speakers), but I’m the leader of Alagoas Linux DevGroup and there [are] no [women] in this group.” – Arthur, Brazil/Romania
  • “Groups (from the inside): some of us who have tried to drive IT user groups have worked on this, trying to address it both in the group and its meetings. But, essentially, we have failed. Even Scandinavia seemed no different, at least at the turn of the millennium. This often total lack of participation contrasted with many of my earlier (70s-90s) experiences in various other groups.

I see the only way to beat the blockage would be to have minimal representation — say, 30% — of either sex; all groups, programme committees, etc. should attend to that.  That may seem harsh but hardly more so than the other aspirations of any group or event.

Is there anything to learn from foreign-language schools where often over 75% are female?” – Charles, UK/Spain

  • “It’s a pity that so few women are speaking at conferences. More often than not they have more to add than they might think.” – Daniel, The Netherlands
  • “In the 3 years of London Web Performance we’ve never had a female speaker either (although we’ve had some fantastic contributions from female members at WebPerfDays!).” – Stephen, UK
  • “In my career as developer/devops I worked in many companies, but I have to say few women were/are “hands on”, they are normally in product and project management.

Thinking about the university – I studied Computer Science – women were probably around 1 in 10, and very few ended up programming. Most of them went to teaching or doing Project Management. … The fact they feel a minority probably makes them quiet I’d say, and possibly fearing the audience?” – Mauro, UK

  • “Our female turnout for meetups would be 1 in 50 maybe? I wouldn’t say how many are female in the total Meetup group (1300 registered) as it’s not tracked.

I agree that the proportion of females in sysadmin is very low, although perhaps a slightly higher % of female DBAs (maybe).

[Among] developers the % of women is increasing, and the % in QA is quite high.” – Stephen, UK

  • “I would think that the lack of female speakers is due to the fact that there is a lack of females in IT as a general. We are still rare. […] There have been conferences where I was the only woman attendee. Another thing as well is blogs, I personally know only of about 10 IT blogs written by women as opposed to the hundreds out there written by men. Speaking at conferences and public places is not for everybody … and requires guts and a good topic. It definitely is not the lack of knowledge, but the lack of confidence that your topic will be interesting enough […] to listen to.

[This] is [something] I think about quite often. You cannot be asking about the lack of females in IT[:] it is a much bigger problem – where are the women in science, [boardrooms] or higher positions in the work hierarchy? There are just the few, and they even feel the need to write books about it! [So] definitely the problem is much deeper; it comes from the fact that women still feel [that] working in a [male]-dominated area is an act of defiance, while that should [be] the most natural thing in the world, but years of staying at home to look after the household made us […] unsuitable for the new age where women are equal with men in terms of what is expected of them, if not more.” – Kalina, UK

  • “I feel ladies are not speaking because they are not always asked to! Yes, there may be fewer women in the industry than men, but successful and phenomenal women DO exist and they are willing to make a valid contribution!

Conference organisers should make the decision to increase the percentage of women speakers (based on merit and not gender of course!)” – Chisenga, Zambia

  • “The funny thing is that our meetup has only 3 women out of 67 members! Amazing, isn’t it? I’ve attended also some conferences last year and I noticed the same [as] you. Very few women speak at conferences. My explanation about this is that in our world (computer science, software development, etc.), the majority of the professionals are indeed men – women don’t love this kind of professions but I don’t understand why – so it’s normal to see more men. However, I strongly believe that women are much better to speak at conferences[.] They have a different speaking style and I always prefer to attend a presentation by [a] woman rather than [a] man.” – Patroklos, Greece
  • “According to me, it’s [female] culture. [They] think that [they’re] not able to do all [the] things that a man can do, including in technology fields.” – Yvan, Cameroon

 

Then I got the blogs I should be reading and the groups I should be attending – it might be a good idea for all of us to read these blogs or attend these meetings/groups.

  • “In the Geek Feminism blog you can find some posts that talk about why there are [fewer] women giving talks at conferences.

The reason why the Code of Conduct [is] so popular now in conference[s] is thanks to the work of the Ada Initiative. See https://adainitiative.org/what-we-do/conference-policies/ And http://adainitiative.org/2014/02/howto-design-a-code-of-conduct-for-your-community/” – Ana, Spain

  • […] we have launched an initiative to highlight this very issue called ‘Women in IT’ which was born from sessions held at UKOUG Apps and Tech conferences last year.

Specifically we are looking for women who will speak at Special Interest Groups […], act as a mentor to other women in IT, link with BCS’s Computers in Schools initiative, and ask women in IT to write a short piece about their experience. The key objective for this initiative is to act as a positive catalyst for change, to support and encourage women in IT.” – UK Oracle User Group, UK

  • “Have you seen this paper? It discusses the issues and data.” – Natalie, Ireland

 

And now my views

Reading through these comments, it feels that men are baffled by the lack of women in their groups, and that women do not join because they are not asked to.

Who talks more?

Contrary to popular belief, we know that women do not talk as much as men in a mixed group (see Coates, Jennifer (1993) Women, Men and Language, 2nd Edn, London: Longman). Women’s talk is often referred to as gossip, chatter, nagging, rabbiting, yakking and nattering. You must admit that this will not encourage women to talk at User Group meetings or conferences. Not only do we believe that women talk too much when research shows that men on average talk more than women; this belief also indicates how women and women’s activities have tended to be undervalued.

Conclusion: Women do not feel comfortable speaking to a mixed audience, as they feel pressured by men after centuries of being told that their conversation is rather shallow.

Interruption and dominance

It also appears that men will interrupt a woman far more often than they will interrupt another man. These findings seem to show that men act as if they have more right than women to speak in mixed-sex conversations.

Conclusion: Women are used to being interrupted – so why bother?

Use of diluting phrases

According to Lakoff (Lakoff, Robin (1975) Language and Woman’s Place, New York: Harper & Row), women use linguistic forms that dilute assertion – ‘sort of’, ‘like’, ‘I think’, ‘kind of’ – showing that women are less confident than men and feel nervous about asserting anything too strongly. Other studies claim that women prefer to avoid conflict and so use forms which, by being less direct, allow disagreement to take place without confrontation.

Conclusion: Women are afraid to upset their audience by stating their opinion clearly.

I found some of the comments a little too patronising – I am a woman after all. I found other comments a little too close to the truth – women do not go to meetings because they do not have the time due to childcare, housework, family care. Should we then look at our upbringing and see where our society is going wrong?

  • baby boys in blue, baby girls in pink
  • jewellery for baby girls
  • young girls being told off for being boisterous when little boys are praised for being competitive and pushy

Our society has cast women in set roles – is that why they do not speak at conferences?

Game time

Thank you Stephen, London, for pointing this out.

Conference Speaker Bingo: a bingo card full of excuses for not having more female speakers at STEM conferences

—-

PS. Some of my research was taken from Thomas, Linda et al. (2004) Language, Society and Power, 2nd Edn, Abingdon: Routledge.

PPS. I have just learned that Debian is organizing MiniDebConf Barcelona 2014 – everybody is invited, but talks will be by women only.

Other resources

  • ‘Tech is too important to be left to men!’, Europa, 6 March 2014 – A European Commission campaign to celebrate women in ICT and inspire young women to get involved. Includes some interesting (if perhaps worrying) statistics
  •  ‘Tokenism’, Geek Feminism Wiki – Overview of ideas of tokenism at conferences
  •  Dudman, Jane, ‘Five myths about why there aren’t more women at the top’, The Guardian, 8 March 2014 – Points 2 and 3 are particularly worth reading: point 2 mentions unconscious bias and 3 dissects the argument that able, clever women are already visible, so the women who don’t make it just haven’t tried hard enough
  •  Gahran, Amy, ‘Women at Tech Conferences: Look Beyond Tokenism (comment to Scoble)’, Contentious.com, 2 August 2005 – Another blog discussing tokenism, and dissecting the argument that there is no bias involved in choosing speakers – they are purely chosen on ‘merit’
  •  Kantor, Jodi, ‘Harvard Business School Case Study: Gender Equity’, The New York Times, 7 September 2013 [BEHIND A PAYWALL] – Like ICT, Business Studies is another field with few visible women. This article explains the changes Harvard Business School made to encourage a better learning environment for its female students – not only was there a gap in grade achievement, but female students weren’t speaking or being nominated for awards. This not only covers the method but also the result – it was extremely successful but some male students weren’t happy with the experience
  •  Urbina, Michael, ‘101 Everyday Ways for Men to Be Allies to Women’, Michael Urbina, 26 July 2013 – A lot of male commenters write about how they wish the situation was different. This is a list by a male feminist of the ways men can be more aware of the effects of patriarchy and how they can help
  •  Williams, Zoe, ‘Why female techies in the 21st century face a stone age work culture’, The Guardian, 7 March 2014 – Overview of the problem of the shortage of women in ICT, possible causes
  • Henson, Val, “HOWTO Encourage Women in Linux” – This article is 12 years old but unfortunately it could have been written yesterday