Why academia isn’t delivering on applied research

Recently I have found myself in several separate discussions about the need for applied research on social media and new technologies.  In light of the Digital Britain debates about creative industries and the need to invest in British businesses in the CI space, I’ve been involved in a number of conversations about the difficulty of connecting up user practices, quality of interaction and the business application of digital media.  Industry people and researchers all seem to be in violent agreement that business and government alike lack sufficient research to support widespread adoption of social media.  But while there may well be a need for applied research, there isn’t active support from the academic community to deliver it.  The reasons for this are quite complex, but they come down to how the academic sector values applied research compared with ‘pure’ or ‘basic’ research.

I think it’s appropriate that this post focuses on these reasons and on the problems of the academic sector generally.  I’m not saying this is the main reason why I resigned from academia (after all, I am technically an Adjunct Associate Professor at QUT, and thus still embedded in academia), but the problems with the funding and recognition of applied research are among the reasons why I felt motivated to go out on my own and focus on some of the research and strategy development that academia was failing to deliver.

First, some definitions:
– Basic research: empirical research which explores new concepts or hypotheses and tests their integrity.  In such work, the research questions must be seen to be thorough, and deadlines and the (commercial or contextual) needs of stakeholders in the research outcomes are secondary to the thoroughness of the investigation.
– Applied research: industry-driven or situationally specific research which does not concern itself with whether the research questions being posed are the only questions that relate to the circumstances of the research, and which is typically subject to short deadlines and commercial outcomes.  Applied research is not commercial market research: the questions being posed and the methods used are not necessarily commercially viable research practices, and the research is often testing new engagement techniques rather than applying standard market research methods.  Applied research results can thus act as a springboard for more extensive basic research.

So: given these definitions, and the obvious benefits to academia of engaging with applied research, why are academic institutions so thoroughly bad at delivering it?

A lot of this rests with the perception of ‘value’.  This goes back to very old economic arguments about value and utility, and to the distinction between direct revenue generation on the one hand and resource enrichment and process improvement on the other.  From an academic perspective, it is also about ‘knowledge creation’ and the value proposition embedded in ‘ownership’ of knowledge.  The academic community has long generated its own research funds from the publication of articles and the collection of copyright dues on them.  So in a very real way, the academic community measures its success by the production of new knowledge in the form of publications, and by the income generated from those publications.  Yet at the same time, academia is generally resistant to being measured in terms of applied research outcomes and non-traditional outputs: everything from tools and services to platforms for creative engagement – essentially anything that isn’t a peer-reviewed journal article, conference paper or book.

The rise of the ‘creative industries’ disciplines has been instructive in tracing a shift from the traditional measurement techniques applied to academic research.  Creative Industries as an academic discipline grew from the recognition, in economic debates, that the creative industries themselves were assessed to account for around 4% of world GDP – substantially more than had previously been thought.  Traditionally considered ‘soft industries’, associated primarily with content production and communication, the creative industries were regarded as vocational, and academic engagement with them was limited to critical assessment, evaluation and documentation of creative production techniques.  As the true scope of their economic impact was acknowledged, Creative Industries (with a capital C and I) was born as an academic discipline, and it focused as much on the assessment of creative production as on the act of creativity and the production of outputs that facilitate creativity and creative production.  Simultaneously, the development of rapid prototyping in research and production, and the focus on innovation as a means of improving business practice, advanced to the point where it became natural to consider evaluation techniques in terms of innovation assessment models such as Six Sigma, used extensively in the business sector and famously by 3M (see http://en.wikipedia.org/wiki/Six_sigma).

Of course, academic departments (particularly in the Arts and Humanities) were appalled at the prospect of being assessed in terms of outputs, process improvements and optimisations, production design and pilot programmes.  Had these measurement techniques replaced existing assessment methods (based on publications, research income and teaching) rather than being applied in addition to them, they might have been adopted more successfully.  Because Six Sigma grew out of the manufacturing sector, its focus is on total quality management and the elimination of process defects, and it was thus seen as antithetical to basic Arts and Humanities research because it encourages a focus on practical, verifiable data rather than on the exploration of ideas.  Further, as these Six Sigma-esque measurement techniques were implemented for Arts and Humanities projects, they bureaucratised research practice to the point where the ‘big picture’ of the research was deemed by some to have been ‘lost’ to process improvements and reporting.  Given growing pressure on academics to justify their economic value within the university structure (by taking on larger teaching and assessment loads, and/or generating higher income in research output terms), these process assessments have been treated with little short of contempt by faculty in the traditional Arts and Humanities disciplines.

But the problem is not the measurement techniques themselves.  Six Sigma is reasonable enough, though perhaps a little arbitrary in its application; and since it was designed for manufacturing processes, one might well argue that it is inherently flawed as a mechanism for improving research quality, because it denies the possibility of exploring concepts that cannot easily be quantified. (It’s also rather dated now as a methodology for assessing innovation, but that’s almost beside the point.)  The major problem is that the production of traditional, peer-reviewed academic research is still encouraged and – in all senses of the word – ‘valued’ over the product and process outputs of applied research projects.  This is best demonstrated with a hypothetical scenario.

If a research project is designed to investigate the impact of social media on political engagement, then a traditional (basic research) assessment of the project’s value would focus on its uniqueness, its social importance, and the quality of the proposal in terms of the credentials of the research staff and adequate review of the idea.  In addition to these criteria, a newer Six Sigma-style assessment would also examine the clarity of the questions being posed, the capacity to quantify and measure the variables in the research, and the efforts to minimise the time and resources committed to the project so as to maximise the return on financial investment.  This applied research project assessment would then be held to the famous Six Sigma target – 3.4 defective parts per million opportunities (a 99.99966% defect-free rate).  Projects are thus being assessed in a manner that could be deemed ‘more about process than product’.  And where process is the focus of reporting, the production of traditional peer-reviewed academic articles is interrupted, and the generative impact of research findings is limited by the quantifiability of the variables considered in the research project.
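For readers who want to see where that figure comes from, here is a minimal sketch of the standard ‘defects per million opportunities’ (DPMO) arithmetic behind the Six Sigma target; the research-project numbers in the second example are entirely hypothetical, invented purely for illustration.

```python
# Minimal sketch of the DPMO (defects per million opportunities) arithmetic
# behind the Six Sigma target. The 'project' figures below are hypothetical,
# for illustration only.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def defect_free_rate(dpmo_value: float) -> float:
    """Percentage of opportunities that are defect-free."""
    return (1 - dpmo_value / 1_000_000) * 100

# The canonical Six Sigma target: 3.4 DPMO, i.e. 99.99966% defect-free.
print(f"Six Sigma target: {defect_free_rate(3.4):.5f}% defect-free")

# Hypothetical research-project example: 12 flawed survey responses out of
# 2,000 responses, each checked against 5 quality criteria.
project_dpmo = dpmo(defects=12, units=2000, opportunities_per_unit=5)
print(f"Project DPMO: {project_dpmo:.0f}")                                 # 1200
print(f"Project defect-free rate: {defect_free_rate(project_dpmo):.2f}%")  # 99.88%
```

The point, of course, is that this is purely a process metric: it says nothing about whether the right questions were being asked in the first place.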

Because the focus of academia is always on research publications, and not on the actual project itself, research process optimisation is considered intrusive and inhibiting.  But let’s go back to the research project: if we’re considering the impact of social media on political engagement, then the research outputs are actually the methods by which social media can be deployed to either maximise or minimise political engagement.  This isn’t about the quantity and scope of peer-reviewed articles; it’s about the impact and influence of social media on political process and democratic engagement (or the lack thereof).  Thus the outputs of the research project should be measured not by the quantity of peer-reviewed research, but by the comparative improvements or new resources the research programme identifies for encouraging or limiting political engagement.  That’s what the research project was about.  That’s what it was intended to do.

What’s important here is that the process improvements are actually measuring what is defined in economic terms as ‘value’ (i.e. not ‘utility’).  This is not about describing the scope and range of potential impacts of social media on political engagement, but rather about the most efficient means of encouraging or limiting it.  It is not about all the items in the set; it is about the most efficient items in the set.  All items might be able to facilitate political engagement (utility), but we assume that some are more efficient at meeting this agenda than others (value).  A research programme that investigates the impact of X on Y automatically assesses the effectiveness of all iterations of X, and where some iterations of X are more effective than others, a natural value ordering emerges.  So by focusing more on the research itself (rather than on the quantity of publications arising from the research programme), you might well argue that the application of process improvement-oriented assessment criteria is absolutely appropriate.

Thus measuring the research programme by the resources it identifies and their relative efficiency is actually a reasonable assessment method.  But because financial recognition in university research budgets is based ONLY on the quantity and quality of peer-reviewed articles, the outputs of the research in these measurable, productive terms are effectively irrelevant.  So in real terms, the creative and productive value of the research is undermined by the financial value attached to the research programme’s publications.

Until this aspect of academic practice is reformed, no assessment criteria for creative industries research will generate genuine value for business, for stakeholders, for government policy, or for society.
