OPINION: There is a lot to celebrate in the results of the Excellence in Research for Australia 2012 evaluations. And everyone is talking them up, with almost no one complaining.

It's a bit like a birthday party game of pass-the-parcel. Every child wins a prize, with all the accompanying sound and colour. It seems a perfect end to the year.

But, sadly, the prizes aren't worth very much. And later on, in the grown-up world, not everyone will win. The lack of tension and absence of hard decisions made on the back of this new data make me uneasy. This isn't the way they did things in Britain, where research assessment exercises forced a concentration of excellence and drew increased government investment into key strengths.

The Australian Research Council should be congratulated for doing the job well and with a minimum of fuss. The computer systems worked and the burden on academics was not overwhelming. The introduction of a quality measure has been good for the university sector, and public scrutiny of our productivity and international standing is welcome. I doubt, however, that anyone rushed into a crisis meeting after the results came out to agonise over their ERA strategy.

Why not? Mostly because we already knew the likely outcomes, and there are too few consequences flowing from them. For years the Higher Education Research Data Collection has gathered hard metrics on productivity, and the ARC and National Health and Medical Research Council have rock-solid data on quality. Higher education analyst Thomas Barlow has also broken down dollars spent by discipline code. The ERA results largely mirror previous findings, but the data is softer than the dollar numbers because ERA gives out more high marks. It is easier to give marks than dollars; marks never run out.

The smoothing out of differences between institutions in the ERA process is mostly due to a flaw that should be corrected: the absence of scale. The system favours smaller players because it is relatively easy to have one or two world-class level-five researchers, but almost impossible to maintain a team of 20 operating at the highest level. One weak link disrupts the average.

Furthermore, every university will want to optimise its ERA results. One obvious way to game the system is to hide poor-quality outputs that could drag down the average by scattering them across many codes that never reach the assessment threshold of 50 publications. The smaller you are, the easier this is.

It is necessary to introduce scale into the published figures of the next ERA evaluation. This may be contentious at first, but it would almost certainly be helpful to the big players, which would be recognised for their genuine strengths, as well as to the smaller players wishing to build up critical mass in areas of quality.

ARC chief executive Aidan Byrne has foreshadowed another big change: the introduction of an impact assessment. Unfortunately, impact is hard to measure: the time lag involved makes the exercise backward-looking, and it can be difficult to apportion credit appropriately when product development has taken a long time.

But it will be a useful exercise. And if measures beyond case studies are carefully considered, such as engagement with industry and contract research dollars, the inclusion of impact could have a beneficial effect on behaviour.

My prediction is that the impact process will also dispel several long-standing myths. The notion that the research-intensive universities conduct only elite, theoretical research will be smashed. A trial of the approach suggests the Group of Eight universities' proportion of impact was about the same as their proportion of research.

It will allow the humanities and social sciences to demonstrate their very great impact, which is often overlooked. Finally, the university sector will have the opportunity to celebrate officially its many contributions to society. Since the Renaissance, intellectual breakthroughs have underpinned rises in the standard of living. Yet, paradoxically, obtaining sustained investment in research remains a battle. The ARC is smart to introduce impact.

One more change would be to increase the financial consequences of the results. Britain had a brutal system in which those scoring top marks got most of the money and the mediocre got nothing. There are stories of universities that poached heavily in an attempt to attain a 5* rating and then, when they just missed out, closed departments and sent everyone packing, including the star recruits.

We don't want this to happen here, but we do need some consequences to flow from the data. The simple introduction of scale linked to dollars should be sufficient to support excellence wherever it occurs and to drive differentiation between universities.

Professor Merlin Crossley is Dean of the Faculty of Science at UNSW.

This opinion piece was first published in The Australian.