Monthly Archives: March 2013

Outputs of FP7 Future Networks Projects

I started taking a bigger-picture view of FP7 projects mainly with the objective of understanding what the outputs look like. To date, I've focused on the inputs in terms of money and, to a lesser extent, effort; here, I start the process of examining the outputs. (Note that I'm still limiting scope to projects in the Future Networks area – even with this limited scope, the analysis is not easy.)

There is no single, readily available final presentation or report for each project highlighting its achievements. This is a clear omission and should be rectified. While each project has a factsheet and presentation (see here), in many cases these do not reflect the outputs of the project; indeed, in quite a few cases they seem to be based more on the project proposal and could have been produced at the start of the project rather than the end.

There are, of course, the project deliverables. While these are an important and necessary part of the process, they are largely inaccessible to parties who are neither involved in the project nor working in a very closely related area. Moreover, the deliverables are defined in the project proposal and are typically not structured such that one final deliverable captures all the key outputs of the project. Hence, it was not possible to use them as the basis for understanding project achievements.

There is considerable diversity in the nature of the projects, and this affects the types of outputs different projects have. For example, some projects clearly have a development and prototyping focus and place little emphasis on classical research publications; others are very focused on publications or technology roadmaps; others still produce quite a bit of open-source software. This makes it difficult to perform a more holistic evaluation and, indeed, it probably does not make sense to evaluate all projects in the same way.

The use of standard social media communication tools is very limited in this arena. This is probably improving with the set of projects now kicking off, but it surely affects a project's ability to have impact. It's also interesting to note that these tools are very transparent: the number of Twitter followers, Facebook likes and YouTube channel views are all publicly visible. This is an important point for understanding how much impact these projects are having and will certainly help in future evaluations of project impact.

My data processing this time was quite limited (and much less effort than in the last few posts). This time around, I wanted to compile a list that somehow reflects the outputs of the projects. As there was no final report, I opted to form a consolidated set of links to publications for each project. I have ideas about putting these into Mendeley lists, but that could be more work than I'm willing to put into this activity.

I will do a little further processing on this to understand how many publications there are and how much impact they are having, perhaps for the larger projects.

Spreadsheet here.


Analysis of Organizations involved in FP7 Future Networks

Following my previous two posts on the basic analysis of the projects and some analysis of project co-ordinators, I did a bit more delving into partners in these projects.

It took quite a bit of work and involved reorganizing the data significantly. I went through all of the projects funded in this space so far, extracted each of the partners from the public documents, entered them into the spreadsheet and classified them by organization type (Enterprise, Public Body, Higher Education or Non-Profit Research Organization) and country.
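As a sketch of how this kind of record can be tallied once it is out of the spreadsheet – the rows and column names here are invented for illustration, not taken from the actual data:

```python
from collections import Counter

# Hypothetical rows: one entry per project participation, mirroring the
# spreadsheet columns described above (organization type and country).
partners = [
    {"project": "P1", "type": "Enterprise",       "country": "FR"},
    {"project": "P1", "type": "Higher Education", "country": "DE"},
    {"project": "P2", "type": "NPRO",             "country": "DE"},
]

# Tally participations by organization type and by country.
by_type = Counter(p["type"] for p in partners)
by_country = Counter(p["country"] for p in partners)

print(by_type.most_common())
print(by_country.most_common())
```

The same two tallies underpin all the percentages quoted below.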

Link to spreadsheet.

The data does have some imperfections – I'm sure there are inconsistencies in how I classified partners across projects. For example, where I was not familiar with an organization, I may have classified it as Higher Education in one project and as a Non-Profit Research Organization (NPRO) in another (a few of these organizations are closely linked to universities, but it's not clear whether they are independent or not).

I may get around to doing a further pass on the data in an attempt to clean it up a bit, but at present, I think it’s good enough to perform a meaningful analysis: cleaning it up won’t significantly change the findings.

So, what can we learn from this data? I include the most interesting points here:

  • Just under 50% of the organizations involved in the Future Networks area are Enterprises; the remainder are a mix of Public Bodies, Higher Education Institutions and NPROs;
  • Higher Education institutes account for almost 40% of the total;
  • Germany and France had the highest numbers of participants, with 15.4% and 14.8% respectively, although the overall participation rates differ significantly from the breakdown of co-ordinators. This was more or less expected, as they lead the EU in terms of both population and GDP; however, these percentages are somewhat lower than Germany's and France's shares of EU GDP and population;
  • Italy and the UK featured significantly lower, with 8.1% and 8.8% respectively – a lower involvement in the programme than would be expected from these countries given their populations and contributions to GDP;
  • At 9.45%, Spain was more or less where it would be expected;
  • There was relatively low engagement from the Eastern European countries, as would be expected;
  • As with the analysis of the co-ordinators, Greece featured quite well with 5.4% of the participants.
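The country shares listed above are simply participation counts divided by the overall total. A minimal sketch of that calculation, with made-up counts (the real figures are in the linked spreadsheet):

```python
from collections import Counter

# Toy participation counts per country – illustrative only.
participations = Counter({"DE": 5, "FR": 4, "IT": 1})

total = sum(participations.values())
shares = {country: round(100 * n / total, 1)
          for country, n in participations.items()}

for country, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {pct}%")
```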

What does the above tell us? It raises questions about what the expected mix of participants under such a programme should be. As it is supposed to be a risky research activity (with funding rates consistent with this), it is to be expected that universities and NPROs feature prominently; however, if the objective is ultimately to map the funding to commercial activity and jobs, then enterprises must engage seriously. I think the numbers presented above do not show fundamental problems with the operation of the system in terms of organization breakdown at the highest level, although the data does not address the issue of SME involvement.

It’s probably necessary to do a further breakdown of organizations by country – I suspect this might highlight some countries in which the primary participation is through Higher Education institutions or NPROs; if so, it could validly be asked whether the funds are really having any impact on commercial activity in those countries.

I hope to do the above breakdown next and then I’ll shift the focus a little to the outputs of the projects.

A little more FP7 Open Data…

Continuing my work of the previous blog post, I’ve added a little more raw data to the basic spreadsheet containing information on EU funded research projects on Future Networks. The additions include the call that the project was funded under, more accurate financials, information on the coordinators and some more info on project durations.

I also did a little more analysis of the data set, although I seemed to spend more time figuring out how to do things in Google Spreadsheet than actually gaining insights from the data.

The data is available here – feel free to copy and work with it.

Some general observations from the data:

  • Call 1 had a larger budget, but the total budgets for Call 4 and Call 5 combined come to roughly the Call 1 budget – this is not news to anyone playing in the space;
  • The lion’s share of the budget was allocated to STREPs and IPs, although there was a large allocation to NOEs in Call 1;
  • In Call 1 and Call 4, IPs got a larger allocation than STREPs, but in Call 5 this was not the case – probably because some of the priorities involved development of radio or optical technologies, which can be very costly and require significant input from a substantial set of partners.
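The budget comparison across calls is just a per-call sum over project budgets. A sketch with invented figures (the actual per-project financials are in the spreadsheet):

```python
from collections import defaultdict

# Hypothetical (call, budget in M EUR) pairs for a handful of projects.
projects = [("Call 1", 12.0), ("Call 1", 9.5), ("Call 4", 8.0),
            ("Call 4", 4.5), ("Call 5", 7.5)]

# Sum project budgets per call.
budget_per_call = defaultdict(float)
for call, budget in projects:
    budget_per_call[call] += budget

# Compare Call 1 against Calls 4 and 5 combined.
print(budget_per_call["Call 1"])                              # 21.5
print(budget_per_call["Call 4"] + budget_per_call["Call 5"])  # 20.0
```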

I started some partner analysis – for now, I’ve just included info on the number of partners and more detail on the co-ordinator. In my experience, the role of the co-ordinator is very important and understanding their motivations is important in understanding how the system works. Also, getting info on the co-ordinator was a little easier than digging up more info on all the partners.

Analysis of the co-ordinators does yield some interesting observations:

  • France was most successful in terms of projects co-ordinated, with just over 25% of the projects co-ordinated by French organizations;
  • Interestingly, many of the French co-ordinators were industrials, although it is worth noting that some of these are consultancies that simply co-ordinate projects and have little skin in the game, so to speak;
  • Germany was next – I expected to see a similar pattern in German participation, but in fact German academic institutions led more projects than their industrial compatriots;
  • Overall, most of the projects were led by industrials (54%), though again some of these were consultancy types – academics led 37% of the projects and public bodies, e.g. research or standardization institutes, led 9%;
  • Spain and Italy had reasonable representation, although significantly behind France and Germany;
  • Greece seemed to be punching considerably above its weight, co-ordinating 5 projects, although it’s worth noting that these were all led by Greek academics.

Next, I plan to do a little more analysis of the partners…