MODERN ASSESSMENT METHOD OF RESEARCH OUTCOME METHODOLOGICAL SIGNIFICANCE

Olga Popova1+, Dmitry Romanov2, Marina Evseeva3

1,2Department of Information Systems and Programming at the Kuban State Technological University, Krasnodar, Russia

3Department of Applied Linguistics and New Information Technologies at the Kuban State University, Krasnodar, Russia

ABSTRACT

This paper presents multi-parameter qualimetric models and methods for assessing the methodological significance of individual and collective research outcomes. The authors substantiate that scientometric parameters based on citation ratings cannot fully reflect the significance of individual and collective research outcomes. In their view, only those research outcomes that can be applied in the educational process, namely in the teaching content, can be deemed significant. In other words, the results of research activities (both collective and individual) should have methodological significance, i.e. be suitable for use as teaching content. Another important point highlighted by the authors is that, in contrast to citation-based scientometric parameters, the methodological significance of a research group's outcomes reflects not only the group's social activity (its influence on the scientific community) but also the interrelation between science and education, whose role is steadily increasing in the modern world. The practical significance of this study lies in the possibility of applying the proposed criteria of methodological significance in systems for monitoring the research activities of scientific institutions (including higher educational institutions). Its theoretical significance lies in the further comprehension of issues connected with boosting research efficiency among scientists, i.e. in its importance for the sociology of science. Methods: review of the scientific and methodological literature, analysis of management practices in scientific and educational institutions, mathematical modeling, methods of set theory, qualimetry methods, and methods of mathematical statistics. The empirical stage of the study was carried out on the basis of higher educational institutions in the Krasnodar Territory. At this stage, the authors' method for assessing the methodological significance of lecturing researchers' outcomes was tested. The factual information about the outcomes of research activities was obtained from the leading scientometric system in Russia, the Russian Science Citation Index.

Keywords: Research activity, Outcomes, Criterion, Productivity, Methodological significance, Assessment.

Article History: Received: 27 April 2017, Revised: 25 May 2017, Accepted: 29 May 2017, Published: 30 May 2017

Contribution/Originality: This study uses new multi-parameter qualimetric models and methods for assessing the methodological significance of individual and collective research outcomes. The paper's primary contribution is showing that the proposed criteria of methodological significance can be applied in systems for monitoring the research activities of scientific institutions.

1. INTRODUCTION

Science is well known to be a social institution and a social and cultural phenomenon; its role in the development of civilization cannot be overestimated [1-11]. In recent decades, science has turned into an 'industry of new knowledge', which has necessitated a theoretical understanding of this trend. The sociology of science (in general) and scientometrics (in particular) have become independent scientific directions. The introduction of scientometric indicators is aimed at encouraging both researchers and research groups to engage in systematic and efficient activity [2, 8, 12]. The most important functions of science are the production (generation) of new knowledge and its transmission (dissemination and implementation), providing the conditions for an innovative development of society in general and of specific areas of human activity [1, 3-5, 13]. Unfortunately, the growth of the 'army of researchers' does not necessarily mean an increase in the level of research or in the quality (significance and novelty) of its outcomes [5].

Scientific novelty (innovation, for technologies) and theoretical and practical significance are known to be the most important aspects of research outcomes. Both the evaluation and the proof of the scientific novelty of research outcomes are highly difficult tasks, which cannot be accomplished without extensive discussion of the outcomes by the scientific community. Although new technologies and systems (such as the widely used Antiplagiarism programme) open up new possibilities for evaluating the novelty of research outcomes and the roles of the authors in obtaining them, the problem cannot yet be solved satisfactorily: computer systems perform only formal, not semantic (meaning-based), verification; moreover, such electronic systems can be defrauded, e.g. by rewriting techniques. Another trend is becoming increasingly common: the independent obtaining of similar results (with ensuing publications in different journals) by separate researchers and research groups. In other words, modern science does not always exhibit a synergy effect: isolated researchers or research groups may independently produce similar results.

Another challenging issue is the evaluation of the practical significance of research outcomes. The complexity of this evaluation has a number of causes. Firstly, not all results of research activities (whether by individual researchers or by whole groups) find a straightforward implementation in practice (science can be ahead of its time). Secondly, it is not always possible to trace back the theoretical findings that gave birth to this or that technology. As a reminder, technology is a link between theory and practice. Thirdly, it is not always possible to assess the significance and scope of application (dissemination) of a particular technology or technique. For instance, it is impossible (not merely difficult) to imagine the modern world without electricity, yet such an important sphere as the power industry owes its very existence to M. Faraday's law of electromagnetic induction.

The theoretical significance of research outcomes is the easiest to evaluate. This evaluation is easily formalized and, hence, easily implemented by computers. Modern digital technologies (especially database and network technologies) provide all the means for solving scientometric problems (the assessment of bibliometric indicators). Suffice it to mention such scientometric systems as the Russian Science Citation Index, Web of Science, Scopus, AGRIS, DOAJ and others.

Many bibliometric indicators reflecting the theoretical significance of research outcomes (publications, to be exact) are based on citation. The major merit of these indicators is their objectivity. Researchers (lecturers included) can use them to prove their qualifications (unfortunately, biased attitudes of management towards employees, for instance of a chair towards a lecturer, are not rare). The most popular and widespread indicators are the number of references to a researcher's publications and the Hirsch index. The most important advantage of the Hirsch index is that it resolves the contradiction between volume and efficiency (quantity and quality of publications). Thanks to the Hirsch index, a researcher does not resort to 'splitting' publications so as to increase their number (a practice that undoubtedly leads to the growth of 'pseudo-scientific rubbish'), but instead aims at producing quality papers with high impact. On the other hand, the Hirsch index does not 'forbid' publishing new scientific works: the researcher need not fear that new publications with fewer references will 'decrease' their productivity, as was the case before the Hirsch index was introduced. The Hirsch index is calculated for research groups as well: a research group has an index of i provided that at least i of its researchers have individual Hirsch indices of at least i. This particular index is of essential humanistic importance. Firstly, it embodies the principle that 'outstanding employees make an outstanding organization'. Secondly, it does not 'push' management towards firing less efficient researchers, but shows them objectives for professional growth.

Nevertheless, citation-based indicators are not free of flaws. These flaws, which do not depend on how the primary citation data are processed, are as follows.

Firstly, it is not always possible to determine whether a particular bibliographic reference is relevant. Secondly, different sources from one and the same reference list can have different relevance to the publication. For instance, some works referred to in a paper may play a fundamental role for that very paper, whereas others are only of secondary importance. In essence, citation-based bibliometric indicators should be considered indices of recognition by the scientific community rather than indicators of publication quality.

Thirdly (and most importantly), it is impossible to determine the role that the publications in a reference list play in the permanent (continuous) development of scientific knowledge. For instance, suppose a publication A has scored a hundred 'outside' references (i.e. references by members of the scientific community who belong neither to the research group nor to its co-authors), i.e. references that are really hard to obtain by fraudulent schemes. But all one hundred publications referring to publication A have zero citations themselves; in other words, from a formal point of view they are of no importance to the scientific community. Another publication, B, has only one reference to it, in publication B1, which in turn is referenced in publication B2; B2 is referred to in B3; B3 is related in the same way to B4; B4 to B5; B5 to B6; B6 to B7; B7 to B8; and B8 to B9. The role of publication B in the continuous acquisition of new knowledge (in the continuous research process, to be exact) is quite obvious. However, herein lies a paradox: formally, publication A has a greater theoretical significance than publication B!

Thus, the indicators based on citation cannot fully reflect the real significance of research outcomes. This conclusion is supported by facts. What bibliometric parameters (e.g. the widely recognized Hirsch index) did J. Maxwell, M. Faraday, M. Lomonosov, D. Mendeleev, Archimedes (the list is incomplete due to the paper’s volume limitation) have? Can we imagine textbooks on physics without J. Maxwell’s and A. Popov’s achievements, on chemistry – without M. Lomonosov’s, D. Mendeleev’s and H. Cavendish’s contributions, on mathematics – without R. Descartes and N. Lobachevsky?

As a reminder, science as a sphere of activity is closely related to education. The latter's most important task is the transmission of the experience (knowledge, in particular) accumulated by humanity; its mission, i.e. its global goal, is the harmonization of human activity and society. Education must 'keep up with life'; in other words, it has no right to fall behind the development of society, particularly the development of science. This is impossible without close interrelation (collaboration) between science and education.

The problem of the study lies in the question: what parameters objectively reflect the role of individual researchers and research groups in the educational process? The aim of the study is to elaborate models and methods for assessing the methodological significance of their research outcomes. The object of the study is the research activity of individual researchers and research groups; the subject of the study is the methodological significance of research outcomes.

2. METHODOLOGY AND RESEARCH ORGANIZATION

2.1. Methodology

Methodological basis: systemic approach (research activity is viewed as a system process aimed at reaching certain results), sociological approach (science and education are viewed as interrelated social institutions playing a leading role in the development of society), qualimetric approach (multicriteria assessment of the significance of research outcomes is necessitated), metasystemic approach (research outcomes are viewed as a metasystem that includes relatively independent components).

Methods: analysis of the scientific and methodological literature and of management practices in scientific and educational organizations, modeling, methods of set theory, qualimetry methods, and methods of mathematical statistics.

2.2. Research Organization

The study was carried out on the basis of higher education institutions in the Krasnodar Territory. Primary data on the research activities of university lecturers in the Kuban region (n = 768) were obtained from the Russian Science Citation Index (on the eLIBRARY platform). Automated systemic and cognitive analysis made it possible to determine the interrelation between the scientific quality of publications and the methodological significance of university lecturers' input, as well as between the length of their research careers and the methodological significance of their scientific input. The integrated quality index of publications was evaluated according to a simplified scheme (without additional information about the publications). The length of a research career was established by the year of the first publication recorded in the Russian Science Citation Index.

3. OUTCOMES

We believe that only those research findings that are applicable in the educational process (namely, in the content of education) can be considered significant. This point of view can be substantiated by the fact that methodological activity is aimed at the implementation of research outcomes in practice. It is a 'link' between scientific research and practice (here, pedagogical activity). In the context of the 'information explosion' (and hence the rapid obsolescence of knowledge), not only (and not so much) the forms of training but above all the content must be modified continually. Without denying the importance of the informatization of education (the integration of pedagogical and information technologies, a typical example being distance training and online educational resources), we recognize that innovative forms of training do not 'guarantee' modern content.

We firmly believe that the content of the methodological support (in any form, conventional or modern) of the educational process must be updated continually; otherwise, education will not 'keep pace with the times'.

For example, not all methodological brochures that present monitoring as an information management mechanism show that the latter includes not only a design-and-technology component (procedural control models) and criteria-assessment tools (partial criteria and integrative parameters of the control object), but a science-and-methodology component as well (all possible models of the control object subject to its functioning conditions). Modern specialists mostly agree that monitoring is unthinkable without models of the controlled system [7, 9-14]. Another example: not all methodological brochures that address the psychological aspects of control present modern model ideas on personal and professional qualities and competences (according to well-established modern views, these do not comprise only knowledge and skills, but represent systemic sets of knowledge, skills, reasons, values and personal experience in the corresponding sphere) [1, 2, 4, 10].

In other words, research outcomes (both collective and individual) must have methodological significance, i.e. be applicable as the content of methodological support for educational purposes. The level of the methodological significance of lecturing scientists' research outcomes must not be confused with the level of their methodological activity: the latter presupposes that lecturers continually upgrade the methodological support for their courses, making use of the outcomes of other researchers (with proper referencing, avoiding plagiarism).

Identifying the methodological significance of research outcomes (individual or collective) is a rather difficult task, at least because scientometric databases (systems) cannot supply the primary information for it (unlike the data needed for assessing scientific significance). The authors suggest assessing the level of methodological significance of a research outcome as M = V · K1 · K2, where V is the volume in printer's sheets (including text and other information elements), K1 is the coefficient of the status of the information in the academic course (from 0 to 1.0), and K2 is the coefficient of the prevalence of the information. A more accurate assessment is M = V · K1 · K2 · K3, where K3 is a coefficient (from 0 to 1.0) reflecting the role of the researcher or research group in obtaining this information. For example, one of the authors of this paper has devoted many publications to models and methods of competence assessment (these outcomes are used in the course 'Development, analysis and management of software projects'), but it is highly unlikely that the coefficient of his role can exceed 0.5, since for at least two decades the Russian community of lecturing scientists has been dealing with problems connected with competences and personal-professional qualities. When assessing the role of a researcher or a research group, it is necessary to take into account their analysis of the problem area (review of the literature, standard-setting instruments, analysis of cutting-edge hands-on experience, etc.).
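As an illustration of the assessment just described, the following minimal Python sketch computes the significance of a single use of a research result; it assumes the product form M = V · K1 · K2 · K3 reconstructed above, and all numerical values are hypothetical.

```python
def methodological_significance(volume_sheets, status_k1, prevalence_k2, role_k3=1.0):
    """Significance of one use of a research result, assuming the
    product form M = V * K1 * K2 * K3 described in the text."""
    for name, coeff in (("K1", status_k1), ("K2", prevalence_k2), ("K3", role_k3)):
        if not 0.0 <= coeff <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {coeff}")
    return volume_sheets * status_k1 * prevalence_k2 * role_k3


# Hypothetical example: 1.5 printer's sheets, 'nuclear' status (K1 = 0.75),
# prevalence K2 = 0.6, and the author's role K3 = 0.5
print(methodological_significance(1.5, 0.75, 0.6, 0.5))  # 0.3375
```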

We suggest the following gradations for the status coefficient: 0.25 for the initial (independent) level, 0.5 for the consequent, 0.75 for the nuclear and 1.0 for the fundamental level. For instance, in the course 'Metrology, standardization and certification of software products', issues like 'Calibration and verification of measuring tools' or 'Certification schemes' have the lowest status (0.25), issues like 'Measurement automation' or 'Informatization of quality management' have the middle status (0.5), issues like 'Measurement classification' or 'System of quality management' have the high status (0.75), and the issue 'Measurement as a reality cognition method' is fundamental (1.0). To determine the status of a given piece of information, it is expedient to build a cognitive model of the academic course: a directed graph whose vertices are the didactic units and whose edges are the links between them.
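The cognitive model mentioned above can be prototyped as an ordinary directed graph. The sketch below is purely illustrative: the course units and the dependency-count heuristic for ranking their status are assumptions of ours, not part of the authors' published method.

```python
from collections import deque

# Toy cognitive model of a course: vertices are didactic units,
# an edge A -> B means that unit B builds on unit A.
course_graph = {
    "Measurement as reality cognition method": ["Measurement classification"],
    "Measurement classification": ["Measurement automation",
                                   "Calibration and verification of measuring tools"],
    "System of quality management": ["Informatization of quality management"],
    "Measurement automation": [],
    "Calibration and verification of measuring tools": [],
    "Informatization of quality management": [],
}

def dependent_units(graph, unit):
    """Count units that directly or transitively build on `unit`."""
    seen, queue = set(), deque(graph.get(unit, []))
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(graph.get(v, []))
    return len(seen)

# Units on which many others depend are candidates for a higher
# status coefficient K1 on the 0.25 / 0.5 / 0.75 / 1.0 scale.
for unit in course_graph:
    print(f"{dependent_units(course_graph, unit)}  {unit}")
```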

The coefficient of prevalence is determined from the type and status of the methodological development (textbook, study guide endorsed by the Ministry of Education, study guide, online resource, etc.): K2 = T2 · P, where T2 is the coefficient of the development's status and P is the coefficient of prevalence proper. For online resources the prevalence can be determined from the number of visits; for printed materials it is determined from the circulation V1.

On the other hand, one and the same scientific datum can have methodological significance for different academic courses at different institutions of higher education. Therefore, a more objective assessment is M = M1 + M2 + … + MD (the sum over all occurrences), where D is the number of uses of the analyzed scientific information in the methodological support of the educational process and Mi is the methodological significance of that information in the i-th occurrence.
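A short numerical illustration of this aggregation, assuming the additive form given above and purely hypothetical per-occurrence values:

```python
# The same scientific result is used in D = 3 courses/institutions;
# the per-occurrence significances M_i below are hypothetical.
occurrence_significances = [0.34, 0.12, 0.50]
overall_significance = sum(occurrence_significances)  # assumed additive aggregation
print(overall_significance)  # 0.96
```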

The authors suggest that the integral indicator of publication quality be determined as a function of the following parameters. Here: S is a coefficient dependent on the status of the publication (for instance, an article in an international scientometric system, a patent, a monograph, conference abstracts, etc.; it varies from 0 to 1.0), N is its citation index, I is a coefficient dependent on the availability of extra information about the publication (it varies from 0 to 1.0 and is determined by expert evaluation), C is the impact factor of the journal that published the study, and M is the methodological significance of the publication. This indicator can be regarded as an integral index of the scientific and methodological (not purely scientific) significance of the publication. The extra information about a publication can include: compliance with planned (especially funded) research or R&D work, participation in competitions (e.g. the journal 'Secondary Vocational Education' organized a competition for the best research paper in 2015), and so on. The citation index of a publication is built from N1, the number of outside references to the publication, N2, the number of references to it by researchers who have co-authored with a member of the research group (according to the scientometric database), and N3, the number of the authors' own references to the publication (by any member of the research group). The authors suggest this model in order to discourage attempts at artificially 'improving' bibliometric indicators.
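Since the original formulas for the citation index and the integral indicator did not survive the source layout, the sketch below only illustrates one plausible scheme: the down-weighting of co-author references, the exclusion of self-references and the particular weighting of S, N, I, C and M are our assumptions, not the authors' published model.

```python
def adjusted_citation_index(n1_outside, n2_coauthor, n3_self, coauthor_weight=0.5):
    """Citation index built from N1 (outside), N2 (co-author) and N3 (own)
    references; ignoring self-references and down-weighting co-author
    references is an assumed scheme consistent with the stated goal of
    discouraging artificial inflation of bibliometric indicators."""
    _ = n3_self  # intentionally not counted
    return n1_outside + coauthor_weight * n2_coauthor

def integral_publication_quality(status_s, citation_n, extra_info_i,
                                 impact_c, methodological_m):
    """Hypothetical integral indicator combining the parameters S, N, I, C, M
    defined in the text; the additive-multiplicative form is illustrative."""
    return status_s * (citation_n + impact_c) + extra_info_i + methodological_m

n = adjusted_citation_index(n1_outside=12, n2_coauthor=4, n3_self=7)   # -> 14.0
print(integral_publication_quality(0.8, n, 0.3, 1.1, 0.96))            # -> 13.34
```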

On the other hand, it is quite common for the latest scientific data (those one can use for academic purposes) to be found not in a single publication but in a series of logically interrelated publications (usually connected with one and the same subject of research). In this case, the index of the theoretical and methodological significance of research outcomes (individual or collective) is computed from F, the number of publications that together contain outcomes of methodological significance M, and Tj, the quality index (based on purely theoretical significance) of the j-th publication.

An analysis of the publishing activities of researchers in the Krasnodar Territory (at universities and research institutions) showed a close relationship between the integral index of the scientific quality of an author's publications, on the one hand, and the index of their methodological significance, on the other; the correlation coefficient is 0.72. At the same time, the correlation coefficient between the Hirsch index and methodological significance is considerably lower, 0.56. In other words, a high value of the complex indicator of the theoretical significance of publications (the Hirsch index) does not guarantee a high methodological value of the research outcomes (i.e. of those results that can be reflected in education). Table 1 shows the levels of methodological significance achieved by researchers in the Krasnodar Territory; a sketch for reproducing such a correlation analysis is given after the table.

Table 1. Percentage of researchers in the Krasnodar Territory by the methodological significance index of their works

    Methodological significance index, pts     Percentage of researchers, %
1.  >1.2                                       1.2
2.  0.9-1.2                                    6.2
3.  0.6-0.9                                    12.6
4.  0.3-0.6                                    35.6
5.  0-0.3                                      44.4

Source: the Russian Science Citation Index, Russia's leading scientometric system.
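Returning to the correlation coefficients reported above: for readers who wish to reproduce this kind of analysis on their own data, the following sketch computes a Pearson correlation between two per-researcher indices. The synthetic data below merely stand in for the RSCI records, which are not reproduced here.

```python
import random
from statistics import correlation  # Pearson correlation, Python 3.10+

random.seed(0)
n_researchers = 768  # sample size used in the study

# Synthetic stand-ins for the integral quality index and the
# methodological significance index of each researcher's publications.
quality_index = [random.uniform(0.0, 3.0) for _ in range(n_researchers)]
methodological_index = [0.3 * q + random.uniform(0.0, 1.0) for q in quality_index]

print(round(correlation(quality_index, methodological_index), 2))
```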

The relationship between the index of the methodological significance of a researcher's work and the length of their research career was also analyzed (as of December 2016). It was found that all researchers whose methodological significance index is 0.9 or higher have been doing research for not less than 20 years. Of those with an index between 0.6 and 0.9, 59 per cent have been doing research for not less than 20 years, and the remaining 41 per cent for 15 to 20 years. This can be explained by the fact that obtaining research results applicable in teaching requires a very high research competence, above all its behavioural component (personal research experience). The researchers with an index from 0 to 0.6 show the following distribution of research-career length: 19 per cent have been doing research for not less than 20 years, 39 per cent for 15 to 20 years, and 42 per cent for less than 15 years. In other words, a long research career does not guarantee a high methodological significance of one's work. The converse forecast is more reliable: a researcher with a short research career cannot obtain results of high methodological significance.

The authors contend that it is the integral index of theoretical and methodological significance that makes it possible to single out promising and outstanding researchers, i.e. top lecturing researchers (and even outstanding scientists of the highest rank). For instance, it is impossible to determine the Hirsch index for Archimedes, Maxwell and other great scientists, yet everyone knows that no textbook on physics can do without their findings. Or, for instance, in 1975 J. Holland proposed the idea of evolutionary computation (genetic algorithms), which has since been developed intensively, with thousands of textbooks, monographs and study guides covering these promising methods of applied mathematics.

4. CONCLUSION

An objective (comprehensive) assessment of research outcomes is an extremely difficult task, so it is not surprising that the world scientific community has been trying to solve this problem for decades. The majority of the bibliometric parameters in use are based on citation. Nevertheless, the authors of this study contend that the assessment of the significance of research outcomes will be comprehensive and objective only when the assessment of theoretical significance is combined with the evaluation of methodological significance. These integrative indicators can be assessed even within short periods after the studies are published, whereas assessing practical significance needs more time (sometimes decades or centuries, as with the discoveries of C. Babbage, Fibonacci, etc.). Besides, the theoretical significance of research outcomes is an indicator that (without the support of other parameters) does not fully reflect the social activity of the academic and educational community, and only poorly reflects the integration (interrelation) of science and education. A comprehensive parallel assessment of both the theoretical and the methodological significance of research outcomes (both individual and collective), however, will make it possible to properly evaluate both the social activity of the academic and educational community and the interrelation between research and education.

The findings of this study allow us to offer practical advice on how to improve research management in higher education institutions so as to strengthen the integration (interrelation) of research, methodological and educational activities. The authors contend that not only individual lecturing researchers but also research groups should keep a portfolio of research, methodological and educational outcomes. It is highly advisable for an educational institution to hold competitions between its research groups. When analyzing portfolios, one should appraise the use of research outcomes in the methodological support (in various forms, including online educational resources) of education. The analysis of such portfolios can be extremely useful for appraising the efficiency of research micro- and meso-environments (chairs and departments), as well as for implementing a new, 'upgraded' remuneration system.

This study is a logical continuation of the research project ‘Monitoring of continuing education quality’, undertaken with financial support from the Russian Foundation for Humanities Research (Project # 13-06-00350 as of 13th June 2013).

5. ACKNOWLEDGEMENTS

The study was carried out with the financial assistance provided by the Russian Foundation for Humanities for the project No. 16-03-00382 as of 17th March 2016 issue: ‘Monitoring of research activities of educational institutions in the information society’.

Funding: This study received no specific financial support.
Competing Interests: The authors declare that they have no competing interests.
Contributors/Acknowledgement: All authors contributed equally to the conception and design of the study.

REFERENCES

[1] E. V. Gavrilova, "Translation of scientific experience and tacit knowledge," Sotsiologicheskie Issledovaniya, vol. 9, pp. 28-35, 2015.

[2] D. Z. Zalibekova, "Aspects of increasing the role of the scientific potential of the Russian Federation," Theory and Practice of Social Development, vol. 3, pp. 246-248, 2014.

[3] V. I. Loiko, "Modern models and methods for diagnosing the research activities of scientific and pedagogical teams," Polytechnical Network Electronic Scientific Journal of the Kuban State Agrarian University, vol. 112, pp. 1906-1933, 2015.

[4] N. A. Pashkus and V. Y. Pashkus, "Competitiveness of the university in the new economy: Approaches to evaluation," Theory and Practice of Social Development, vol. 12, pp. 122-127, 2014.

[5] Y. N. Tolstova and I. D. Voronina, "On the necessity to widen the notion of sociological measurement," Sotsiologicheskie Issledovaniya, vol. 7, pp. 67-77, 2012.

[6] L. V. Yurkina, "Integration of science and education: Trends and opportunities," Theory and Practice of Social Development, vol. 2, pp. 147-149, 2014.

[7] V. A. Yasvin, Educational environment: From modeling to design. Moscow: Smysl, 2001.

[8] N. J. van Eck and L. Waltman, "Generalizing the h- and g-indices," Journal of Informetrics, vol. 2, pp. 263-267, 2008.

[9] F. Franceschini, D. Maisano, A. Perotti, and A. Proto, "Analysis of the ch-index: An indicator to evaluate the diffusion of scientific research output by citers," Scientometrics, vol. 85, pp. 203-217, 2010.

[10] R. Iskander, L. Pettaway, L. Waller, and S. Waller, "An analysis of higher education leadership in the United Arab Emirates," Mediterranean Journal of Social Sciences, vol. 7, pp. 244-248, 2016.

[11] R. S. Jonash and T. Sommerlatte, The innovation premium: How next generation companies are achieving peak performance and profitability. Cambridge, Massachusetts, 2000.

[12] N. Mat, J. Alias, and N. Muslim, "The impacts of organizational factors on knowledge sharing in higher learning institutions (HLIs): Case at Universiti Kebangsaan Malaysia (UKM)," Mediterranean Journal of Social Sciences, vol. 7, pp. 181-188, 2016.

[13] S. Sorooshian, N. F. Aziz, A. Ahmad, S. N. Jubidin, and N. M. Mustapha, "Review on performance measurement systems," Mediterranean Journal of Social Sciences, vol. 7, pp. 123-132, 2016.

[14] J. J. G. Hoyo and C. J. Madariaga, "The debate on the concept of value: Interpretations from the perspective of economics and social anthropology," Mediterranean Journal of Social Sciences, vol. 7, pp. 11-20, 2016.

Views and opinions expressed in this article are the views and opinions of the author(s), Journal of Asian Scientific Research shall not be responsible or answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.