OPINION: If you follow the right people on Twitter, you’ll be aware that not everyone likes student experience surveys.

There is evidence of bias against female academics, who are sometimes judged on their dress rather than their competence. Tougher courses and rigorous grading can also be punished in the surveys.

Consequently, there are frequent calls to get rid of these imperfect surveys and replace them with proper measures of teaching effectiveness.

Like most university managers, I’d love to find a perfect measure. But I’ve come to the conclusion that we just don’t have any single indicator for assessing teaching quality yet.

So instead of giving up on student surveys, I’d like to use them more, in the hope of understanding and improving them, while the whole sector continues to develop a more extensive basket of measures that better reflect everyone’s contributions to good teaching. As internet technologies mature, graduate outcomes, teaching effectiveness, reach and innovation should all become easier to measure.


But should we be concerned that relying on surveys documented to be biased against women will work against women’s interests?

My hope is that we can all become aware of that issue, and that will help us to use the data wisely. That’s partly why I’m writing this.

My bigger fear is that abandoning student surveys would be more harmful to women. They disproportionately carry the highest teaching loads, including large, first-year compulsory courses, while senior men often teach niche courses in their own research areas.

Without any measurement at all, some quite remarkable achievements in teaching big and difficult classes would remain invisible, and some of those doing the heavy lifting in vital university teaching would receive little recognition or reward. Committing to student experience surveys, imperfect as they are, at least provides an opportunity to showcase and reward important contributions.

Of course, it can be argued that student experience surveys don’t really measure teaching quality and that’s why they aren’t called teaching quality surveys or student evaluations of teaching. But they do represent important information.

Making everyone aware that the student experience matters to a university is no bad thing.

What’s more, I believe that most students take surveys seriously, provided staff do too, and that they value the same things as academics do: effort, organisation, and the respect shown by their teachers, rather than more superficial things such as showmanship.

But are we at risk of survey fatigue?

Yes, but hopefully by the time we reach that stage we’ll know more, and statisticians will help us to sample more efficiently rather than surveying every student after every course.

And what about soft marking, grade inflation and the erosion of academic quality?

Each new generation of academics worries about declining academic standards, but at the top end, at least, I think standards are rising as the sector becomes ever more competitive.

Most importantly, by carefully separating surveys from final assessments and by being aware of grade inflation, we should be able to prevent soft marking creeping in as an effort to curry favour in student surveys.

However, I do worry that if we don’t try everything possible to measure both the student experience and, ultimately, teaching effectiveness, then these vital aspects of university life will end up being neglected, and other, more measurable things, such as research outputs, will increasingly take priority.

Research metrics are also highly contentious and I’m very aware of the arguments over impact factors, citation databases, H-indices, pure and applied research dollars, and research assessment exercises like Excellence in Research for Australia.

But I am also aware that these measures drive management decisions and, most critically, support investments in research.

I’d like to see student experience metrics driving investments in good teaching, too.

I’m even confident we are smart enough to use imperfect metrics, because I’ve seen how imperfect research metrics have helped stabilise and increase commitments to research funding.

There is never enough research funding to support all the great ideas and there are many problems with grant systems. But the world over — and most notably as a result of the first few Research Assessment Exercises in Britain — overall research investment has been sustained or increased partly on the basis of the argument that performance has been measured and the investment has been justified.

Andrew Norton’s recent report, 'Mapping Australian Higher Education 2016', shows that student satisfaction has been rising and continues to rise. Individual staff and institutions do care about the student experience and it is impressive that satisfaction rates have increased even as the number of students has skyrocketed.

The sector has become more efficient, and it has done so through the combined efforts of high-performing staff, including excellent teaching-focused staff, and effective management.

Student experience surveys are one way of identifying and celebrating staff who contribute to the student experience, and the more we think about and use these surveys, and explore other options, the better they, and we, will become.

Professor Merlin Crossley is Deputy Vice-Chancellor (Education) at UNSW.

This opinion piece was first published in The Australian.