


      SELECTIVITY IN FUNDING: EVALUATION OF RESEARCH IN AUSTRALIA

      Published
      research-article

            Abstract

            Selectivity and concentration in research funding have become unavoidable in Australia today. Some of the reasons for this are reviewed in relation to international economic, scientific/technical and conceptual developments. The resulting need to develop an evaluative culture in and for Australia is discussed. The reasons for undertaking evaluations are outlined, and a working definition of ‘research evaluation’ that may be suitable within the Australian context is developed. The parameters that may deserve consideration in designing an evaluation are detailed, and a series of conceptual and practical guidelines is put forward. Several barriers to implementing evaluations that may apply to Australia are addressed. Finally, the implications of the concept ‘accountability’ for both the recipients of government support and government itself are briefly raised.

            Content

            Author and article information

            Journal
            Prometheus: Critical Studies in Innovation
            Pluto Journals
            ISSN: 0810-9028
            eISSN: 1470-1030
            June 1988
            Volume 6, Issue 1, pp. 34–60
            Article
            Prometheus, Vol. 6, No. 1, 1988: pp. 34–60
            DOI: 10.1080/08109028808631838
            Copyright Taylor & Francis Group, LLC

            All content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission of the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).

            History
            Page count
            Figures: 0, Tables: 0, References: 89, Pages: 27
            Categories
            Original Articles

            Computer science, Arts, Social & Behavioral Sciences, Law, History, Economics
            S&T policy-decision-making, R&D management, Australian research funding policy, research evaluation

            NOTES AND REFERENCES

            1. J. S. Dawkins, “The challenge for higher education in Australia”, a speech by the Minister for Employment, Education and Training, Canberra: Department of Education, Employment and Training, 22 September, 1987, pp. 11–12.

            2. Commonwealth Tertiary Education Commission, Review of Efficiency and Effectiveness, Canberra: CTEC, October, 1986.

            3. Australian Science and Technology Council, Improving the Research Performance of Australia's Universities and Other Higher Education Institutions, A report to the Prime Minister, Canberra: ASTEC, February, 1987.

            4. Department of Employment, Education and Training, “Government moves quickly to establish Australian Research Council”, A public bulletin, Canberra: DEET, 23 September, 1987.

            5. UGC, A Strategy for Higher Education Into the 1990's: Criteria for Rationalisation, London: HMSO, University Grants Committee, 1984.

            6. S.S. Blume et al., Evaluation of Research: Experiences and Perspectives in the Netherlands, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985.

            7. Office of Technology Assessment, Research Funding as an Investment: Can We Measure the Returns?, A Technical Memorandum, Washington, D.C.: Congress of the United States, April 1986.

            8. M. Gibbons, Evaluation of Research: Evaluation of Research in Sweden, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985.

            9. Australian Science and Technology Council, Future Directions for CSIRO, A report to the Prime Minister, Canberra: ASTEC, November, 1985.

            10. See for example: Jane Ford, “Govt research drive disappoints,” Financial Review, July 17, 1987, p. 53. See also: “ABS survey shows industry R&D growth”, in Laboratory News, July, 1986.

            11. Michael Gibbons and L. Georghiou, Evaluation of Research: A Selection of Current Practices, A report prepared for the Secretary-General of the OECD, Paris: OECD, 1987, p. 58.

            12. For further discussion see: M. G. Taylor, “Evaluation of research and resource allocation”, International Journal of Institutional Management in Higher Education, 9, 1, March 1985, p. 89.

            13. The university referred to is the University of Wollongong.

            14. Paul Bourke, Quality Measures in Universities, A study commissioned by the Commonwealth Tertiary Education Commission, Canberra, Australia: CTEC, 1986, p. 20.

            15. Some observations made here about changes in the nature and perception of ‘science’ apply more to the Physical and Biological sciences than to the Arts and Humanities. The Physical sciences have generally served as the model around which the theories relevant to this paper were developed. The cultural and intellectual role of the Arts and Humanities may be quite different from that of the Physical/Biological sciences and, therefore, deserves separate consideration. This was not possible in this paper.

            16. C. Ganz Brown, “The technological relevance of basic research,” in B. Bartocha, et al. (eds), Transforming Scientific Ideas into Innovations: Science Policies in the United States and Japan, Tokyo: Japan Society for the Promotion of Science, 1985, pp. 113–134.

            17. F. Narin and E. Noma, “Is technology becoming science?”, Scientometrics, 7, 3-6, 1985, pp. 369–381.

            18. Gibbons et al., op. cit., note 11, p. 14.

            19. R. Johnston, “Why scientists don't get more money,” Metascience, 3, 1985, p. 46.

            20. Department of Science, Science and Technology Statement 1985-86, Tables 5 and 18, Canberra: DoS, November 1986, pp. 14, 88.

            21. Department of Science, Submission to ASTEC Review of Higher Education Research Funding, Tables 1, 6 and 10, Canberra: DoS, 1986.

            22. Australian Science and Technology Council, Improving the Research Performance of Australia's Universities and other Higher Education Institutions, Canberra: ASTEC, February 1987, p. 19.

            23. P. S. Chen, “Evaluation in biomedical research at the National Institutes of Health”, in G. Goggio and E. Spachis-Papasois (eds), Evaluation of Research and Development, Proceedings of the European Community Seminar, Brussels, October 17-18, 1983, Dordrecht, Netherlands: D. Reidel, 1984, p. 115.

            24. See for example: John Ziman and Peter Healey, International Selectivity in Science, A working paper from the Science Policy Support Group, London: SPSG, 1987.

            25. Johnston, op. cit., note 19, p. 49.

            26. The reader is referred to the literature of the sociology and history of science for more detail. The following provide an introduction to a large and varied literature: K.D. Knorr-Cetina and M. Mulkay (eds), Science Observed, London: Sage, 1983; Steven Shapin and S. Schaffer, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life, Princeton: Princeton University Press, 1985; Bruno Latour and S. Woolgar, Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press, 1979.

            27. S. E. Cozzens, “Expert review in evaluating programs”, Science and Public Policy, 14, 2, April 1987, pp. 71–81.

            28. S. Cole, J.R. Cole and G.A. Simon, “Chance and consensus in peer review”, Science, 214, 20 November 1981, pp. 881–86.

            29. A.L. Porter and E.A. Rossini, “Peer review of interdisciplinary research proposals”, Science, Technology & Human Values, 10, 3, Summer 1985, pp. 34–38.

            30. B.R. Martin and J. Irvine, “Assessing basic research”, Research Policy, 12, 1983, p. 72.

            31. Gibbons et al., op. cit., note 11, p. 27.

            32. Gibbons et al., op. cit., note 11, p. 26.

            33. ibid., p. 10.

            34. ibid., p. 46.

            35. ibid., p. 57.

            36. Thomas E. Clarke, The Evaluation of R&D Programs and Personnel: A Literature Review, Ottawa, Ontario, Canada: Stargate Consultants Ltd., December 1986, p. 56.

            37. Most of the writing about research evaluation is targeted more for the Physical and Biological sciences than for Arts and Humanities. However, this does not mean that Arts and Humanities research cannot be systematically evaluated in the general way suggested here, only that further consideration than has generally been given is required as to the specific nature of research performance in those fields.

            38. Gibbons et al., op. cit., note 11, p. 16.

            39. P. Fasella, “The evaluation of the European Community's research and development programmes”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 5.

            40. J. Irvine and B. Martin, Foresight in Science: Picking the Winners, London: Frances Pinter, 1984, p. 141.

            41. Bourke, op. cit., note 14, p. 15.

            42. One example is Lewis Branscomb, “Industry evaluation of research quality: edited excerpts from a seminar”, Science, Technology & Human Values, 1, 39, Spring 1982, pp. 15–22.

            43. J. A. Snow, “Research and development: programs and priorities in a United States mission agency”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 95.

            44. O. T. Fundingsland, “Perspectives on evaluating federally sponsored research and development in the United States”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 100.

            45. Daryl E. Chubin, “Designing research program evaluations: a science studies approach”, Science and Public Policy, 14, 2, April 1987, p. 82.

            46. ibid., p. 88.

            47. The definition of ‘research evaluation’ is partially derived from V. Stolte-Heiskanen, “Evaluation of scientific performance on the periphery”, Science and Public Policy, 13, 2, April 1986, p. 85.

            48. Gibbons, et al., op. cit., note 11, p. 19.

            49. Gibbons, et al., op. cit., note 11, p. 21.

            50. Bourke, op. cit., note 14, p. 23.

            51. Fasella, op. cit., note 39, p. 5.

            52. Gibbons, et al., op. cit., note 11, p. 46.

            53. Fundingsland, op. cit., note 44, pp. 109–11.

            54. J. Irvine, B. Martin and G. Oldham, Research Evaluation in British Science: A SPRU Review, A paper commissioned by the Centre de Prospective et d'Evaluation, Ministère de la Recherche et de l'Industrie, Paris, France, Sussex: University of Sussex, SPRU, April, 1983, p. 5.

            55. For an introduction to the Delphi method see H. Sackman, Delphi Assessment, Expert Opinion, Forecasting, and Group Process, US: Rand Corporation, 1974, as discussed in A.L. Porter et al., A Guidebook for Technology Assessment and Impact Analysis, New York: North-Holland, 1980, p. 126.

            56. J. D. Roessner, “The multiple functions of formal aids to decision-making in public agencies”, IEEE Transactions on Engineering Management, 1985.

            57. One aspect of NIH evaluation activities is exemplified by Francis Narin, Subjective vs. Bibliometric Assessment of Biomedical Research Publications, A US National Institutes of Health program evaluation report, Bethesda, MD: US Department of Health and Human Services, April, 1983.

            58. IDEA Corporation, A Comparison of Scientific Research Excellence at Selected Universities in Ontario, Quebec and the United States, 1982, A technical background paper for The Commission on the Future Development of the Universities of Ontario, Ontario: IDEA Corporation, September, 1984.

            59. Blume, op. cit., note 6, p. 10.

            60. One of the many examples of US NSF investigations: M. P. Carpenter, Updating and Maintaining Thirteen Bibliometric Data Series Through 1982, A final report to the US National Science Foundation, Science Indicators Unit, New Jersey: Computer Horizons, 19 November, 1985.

            61. H.R. Coward, J.J. Franklin and L. Simon, ABRC Science Policy Study: Co-Citation Bibliometric Models, Final report to the Advisory Board for the Research Councils of the United Kingdom, Philadelphia: Center for Research Planning, July, 1984.

            62. Royal Society Policy Studies Unit, Evaluation of National Performance in Basic Research — A review of techniques for evaluating performance in basic science, with case studies in genetics and solid state physics, ABRC Science Policy Studies, No. 1 performed for the Economic and Social Research Council, London: Department of Education and Science, 1986.

            63. H.F. Moed, W.J.M. Burger, J.G. Frankfort and A.F.J. van Raan, “The use of bibliometric data for the measurement of university research performance”, Research Policy, 14, 1985, pp. 131–149.

            64. J.J. Franklin, H.R. Coward, and L. Simon, Identifying Areas of Swedish Research Strength: A Comparison of Bibliometric Models and Peer Review Evaluations in Two Fields of Science, Final report to the National Swedish Board for Technical Development, Philadelphia: Center for Research Planning, 23 April, 1986.

            65. Referred to in B.R. Martin and J. Irvine, Final Report on the Three-Year SPRU Programme on Research Evaluation by the Leverhulme Trust, Sussex: University of Sussex, SPRU, November 1986, p. 17.

            66. H.F. Moed, W.J.M. Burger, J.G. Frankfort and A.F.J. van Raan, On the Measurement of Research Performance: The Use of Bibliometric Indicators, Leiden, Netherlands: The University of Leiden, 1983.

            67. F. Narin, Measuring the Research Productivity of Higher Education Institutions Using Bibliometric Techniques, Report to the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, 10-13 June, 1985, Paris: OECD, 20 May, 1985.

            68. Raphael Gillett, “No way to assess research”, New Scientist, 30 July, 1987, pp. 59–60.

            69. Martin et al., op. cit., note 30.

            70. J.J. Franklin and R. Johnston, “Co-citation bibliometric modeling as a tool for S&T policy and R&D management: issues, applications, and developments”, forthcoming in A.F.J. van Raan (ed.), Handbook of the Quantitative Study of Science and Technology, Amsterdam: Elsevier, 1987-88.

            71. M. Callon, S. Bauin, J-P. Courtial and W. Turner, “From translation to problematic networks: an introduction to co-word analysis”, Social Science Information, 22, 1983, pp. 191–235.

            72. L. A. Myers, “Information systems in research and development: the technology gatekeeper reconsidered”, R&D Management, 14, 4, 1984, pp. 199–206.

            73. N. Cooray, “Knowledge accumulation and technological advance”, Research Policy, 14, 1985, pp. 83–95.

            74. S. Ghoshal and S.K. Kim, “Building effective intelligence systems for competitive advantage”, Sloan Management Review, 49, Fall 1986, pp. 49–58.

            75. Gibbons et al., op. cit., note 11, p. 10.

            76. Johnston, op. cit., note 19, p. 52.

            77. M. Gibbons and L. Georghiou, Evaluation of Research: Evaluation of Research and Development in the United Kingdom, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985, p. 19.

            78. For related discussion see Ken Green, “Research funding in Australia: a view from the North”, Prometheus, 1, June 1986, p. 85.

            79. Stephen Hill, “From dark to light: seeing development strategies through the eyes of S&T indicators”, Science and Public Policy, 13, 5, October 1986, pp. 275–84.

            80. Gibbons et al., op. cit., note 11, p. 24.

            81. Gibbons et al., op. cit., note 77, p. 31.

            82. Blume et al., op. cit., note 6.

            83. Chen, op. cit., note 23.

            84. Fasella, op. cit., note 39, p. 5.

            85. The only one of these subdisciplines that has perhaps not been referenced here is social evaluation research. See L. Rutman and G. Mowbray, Understanding Program Evaluation, Beverly Hills: Sage, 1983, or Marvin C. Alkin, A Guide For Evaluation Decision Makers, Beverly Hills: Sage, 1985.

            86. Hugh Preston, “The new Australian Research Council — its objectives, structure and implications”, A speech given by the Assistant Secretary of the Research Grants Branch, DEET, University of Wollongong, 14 October, 1987.

            87. Terry Hillsberg, an untitled speech given at the conference “Innovation Outlook '87” by the First Assistant Secretary of the Technology and Business Efficiency Division, DITAC, Sydney, 17-18 September, 1987.

            88. J. Ronayne, The Allocation of Resources to Research and Development: A Review of Policies and Procedures, A report to the Australian Science and Technology Council, Canberra: ASTEC, 1980, p. iv.

            89. For discussion see Bourke, op. cit., note 14, pp. 4–5.
