Research Tools papers

Asian Social Science; Vol. 9, No. 5; 2013
ISSN 1911-2017
E-ISSN 1911-2025
Published by Canadian Center of Science and Education
A Comparison between Two Main Academic Literature Collections:
Web of Science and Scopus Databases
Arezoo Aghaei Chadegani1, Hadi Salehi2, Melor Md Yunus3, Hadi Farhadi4, Masood Fooladi1, Maryam Farhadi1
& Nader Ale Ebrahim5
1 Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Malaysia
4 School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia
5 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
Correspondence: Arezoo Aghaei Chadegani, Department of Accounting, Mobarakeh Branch, Islamic Azad
University, Mobarakeh, Isfahan, Iran. Tel: 60-17-319-1093. E-mail: arezooaghaie2001@yahoo.com
Received: February 11, 2013    Accepted: March 18, 2013    Online Published: April 27, 2013
doi:10.5539/ass.v9n5p18    URL: http://dx.doi.org/10.5539/ass.v9n5p18
Abstract
Nowadays, the world’s scientific community publishes an enormous number of papers in different scientific fields. In such an environment, it is essential to know which databases are equally efficient and objective for literature searches. The two most extensive databases appear to be Web of Science and Scopus. Besides supporting literature searches, these two databases are used to rank journals in terms of their productivity and the total citations received, to indicate each journal’s impact, prestige or influence. This article attempts to provide a comprehensive comparison of these databases to answer questions researchers frequently ask, such as: How do Web of Science and Scopus differ? In which aspects are these two databases similar? And, if researchers are forced to choose one of them, which should they prefer? To answer these questions, the two databases are compared based on their qualitative and quantitative characteristics.
Keywords: web of science, scopus, database, citations, provenance, coverage, searching, citation tracking,
impact factor, indexing, h-index, researcher profile, researcher ID
1. Introduction
Web of Science (WOS) and Scopus are the most widespread databases covering different scientific fields and are frequently used for searching the literature (Guz & Rushchitsky, 2009). For many years, WOS from Thomson Reuters (ISI) was the only citation database covering all domains of science. However, Elsevier Science introduced the Scopus database in 2004, and it has rapidly become a good alternative (Vieira & Gomes, 2009). Scopus is the largest searchable citation and abstract source for literature searching and is continually expanded and updated (Rew, 2009). WOS has been challenged by the release of Scopus, an academic literature database built on a similar breadth and scale. WOS and Scopus are expensive products, and it may not be feasible to obtain both of them. Therefore, with the emergence of the new citation database, Scopus, scientific libraries have to decide which citation database can best meet the needs of their users.
The competition between WOS and Scopus databases is intense. This competition has led to improvements in
the services offered by them. Recently, various papers have compared the coverage, features and citation analysis
capabilities of WOS and Scopus (Bakkalbasi, 2006; Burnham, 2006; LaGuardia, 2005; Deis & Goodman, 2005;
Dess, 2006; Li et al., 2010). These comparative studies of WOS and Scopus conclude that the two databases are constantly improving. They also conclude that the advantage of choosing one of these two sources depends on the particular subject area. Some researchers propose undertaking subject-specific analyses to find out which database works best for specific fields or time periods (Bar-Ilan et al., 2007; Bakkalbasi et al., 2006; Neuhaus & Daniel, 2008). According to Lopez-Illescas et al. (2008), prior comparisons of these two databases have not revealed a clear winner. They believe that the advantage of one database over another depends on what exactly is analyzed, the scientific field and the period of analysis. This raises the question: which database is superior? Which one should you subscribe to? The answer is equivocal.
In this article, we briefly analyze the journal evaluation capabilities of the WOS and Scopus databases. The article concentrates on the features and capabilities of each product, and the two databases are compared from different aspects: 1) provenance and coverage; 2) searching and analysis of results; 3) citation tracking and citation analysis; 4) forming and costs; 5) impact factors; 6) indexing (h-index); 7) researcher profile and ID tools. Prior studies comparing these databases are reviewed throughout.
2. Provenance and Coverage
Web of Science (WOS), a product of the “Thomson Reuters Institute of Scientific Information” (ISI), arises from the Science Citation Index created by Eugene Garfield in the 1960s. WOS includes more than 10,000 journals and comprises seven citation databases collecting information from journals, conferences, reports, books and book series. The WOS citation databases are the Social Sciences Citation Index (SSCI), Science Citation Index Expanded (SCI Expanded), Conference Proceedings Citation Index Science (CPCI-S), Arts and Humanities Citation Index (A&HCI) and Conference Proceedings Citation Index-Social Sciences and Humanities (CPCI-SSH). It also has two chemistry databases, Index Chemicus (IC) and Current Chemical Reactions (CCR-Expanded). Since WOS is the oldest citation database, it has strong coverage, with citation and bibliographic data going back to 1900 (Boyle & Sherman, 2006). WOS claims the greatest depth and quality; however, Scopus, which burst into the research domain in 2004, claims to have the biggest database with a wide range of records.
Scopus, officially named SciVerse Scopus, was introduced to the information market by Elsevier in November 2004. Scopus is the largest database on the market for multidisciplinary scientific literature (Kuchi, 2004). Scopus covers more than 49 million records, including trade publications, open-access journals, and book series; almost 80% of these records include an abstract. It contains 20,500 peer-reviewed journals from 5,000 publishers, together with 1,200 open-access journals, over 600 trade publications, 500 conference proceedings and 360 book series from all areas of science (Rew, 2009). Scopus offers new sorting and refining features for researchers to access more than 27 million citations and abstracts going back to the 1960s (Boyle & Sherman, 2006). Many institutions all over the world, in Latin America, Europe, North America, Australia, Asia and the Middle East, believe Scopus has a positive influence on their research. According to a Scopus report, more than 50% of its content comes from the Middle East, Europe and Africa. Boyle and Sherman (2006) attribute the choice of Scopus to the quality of its results, time savings, ease of use and possible effect on research findings.
Vieira and Gomes (2009) compare the two databases using the set of Portuguese universities. They conclude that Scopus provides 20% more coverage than WOS. Scopus covers a broader journal range, although it is limited to recent articles compared to WOS. Their results reveal that two thirds of the records can be found in both databases, but one third of the records appear in only one database. Vieira and Gomes (2009) also note that Scopus coverage of a journal can have gaps; that is, Scopus covers some journals only partially. Checking the title list against the journals indexed in Scopus shows that, for various journals, the Scopus coverage is not complete. The coverage comparison of the WOS and Scopus databases provided by the free online database comparison tool ‘JISC-ADAT’ is shown in Figure 1.
Figure 1. JISC-ADAT coverage comparison of Web of Science and Scopus
3. Searching and Analysis of Results
WOS and Scopus are powerful databases which provide different searching and browsing options (Lopez-Illescas et al., 2008). Both databases offer standard basic and advanced search options. There are various searchable fields and several document types that permit users to easily narrow their searches. Both databases sort results by parameters such as first author, citation count and relevance. The Refine Results section in both databases allows the user to quickly limit or exclude results by author, source, year, subject area, document type, institution, country, funding agency and language. The resulting documents provide a citation, abstract and references at a minimum. Results may be printed, e-mailed, or exported to a citation manager, and may be reorganized according to the needs of the researcher by simply clicking on the headings of each column. Both databases let users set up a profile for personal settings and manage their saved searches and alerts under one tab. A further option is the ability to browse individual journals by clicking the Journals tab, locating the journal name and selecting the individual issue.
Figure 2 shows the search screen of WOS and Scopus databases.
Figure 2. Web of Science and Scopus search screen
The analysis of search results is a competitive characteristic of these two databases; both therefore keep improving their features to make searching easier for users. For example, WOS can create histograms and rankings according to different parameters, a stronger tool than that available in Scopus. Moreover, the Scientific Web Plus feature available through WOS enables users to view search results by clicking an icon on the search result page. Scientific Web Plus helps users find scientifically focused open web content, an author’s latest information, and research scientists’ potential collaborators or competitors, and complements searches. Cited Reference Search (CRS) is another WOS feature for finding previously published and cited articles. WOS also has a “Related Records” feature to detect which authors are citing the same records. The similar feature in Scopus provides the option to discover documents related to the authors or keywords in a specific record.
Another capability of Scopus is the Journal Analyzer, which allows users to evaluate journals based on the number of citations, articles published and percentage not cited. One weakness of Scopus is that it does not include references before 1996. However, one of the important features introduced by Scopus to online databases is faceted indexing; this faceted list of metadata has become a highlight in user training. Yet WOS goes one step further than Scopus in presenting metadata lists: search results in WOS are presented and analyzed not only by author, year, subject and document type, as in Scopus, but also by language, institution and country.
4. Citation Tracking and Citation Analysis
Citation tracking and analysis have been recognized as an important means of evaluating the influence or importance of a specific record over a period of time. Researchers can evaluate records using citation data through WOS and the Scopus Citation Tracker. Citation tracking indicates the number of times that a particular work, author or journal has been cited in other works. It also allows you to track your own impact and the impact of your institution, offers information on other authors and organizations working on similar subjects, and identifies publications on similar topics (Quint, 2006).
Citation tracking of journal articles is another frequent basis for comparing Scopus and WOS (Bergman, 2012). WOS analyzes citations by document type, author, funding agency, country, organization name, language, grant number, publication year and research title. WOS citation reports provide two charts, representing citations per year and published items per year. The report also supplies the average citations per item, the sum of times cited and the h-index (Jacso, 2005). Scopus likewise provides citation information by date for a specific article or other records and, similar to WOS, has a feature to find how many times other authors cited the same topic. The Scopus citation overview can be displayed with an option to exclude self-citations, together with the h-index, in graph format (Jacso, 2005). Therefore, both databases allow authors and researchers to discover how many citations an author or an article has received, to find the journals and authors who publish in their area of interest, to explore citations for a specific journal issue, volume or year, and to review the citations and work of other authors (Rew, 2009). Several studies compare the WOS and Scopus databases from the citation perspective and reach different results (Levine-Clark & Gil, 2009; Haddow & Genoni, 2010; Bergman, 2012). Figure 3 shows the citation reports in WOS and Scopus.
Figure 3. Citation reports in Web of Science and Scopus
Levine-Clark and Gil (2009) compare the two citation databases for citing references from fifteen business and economics journals. They conclude that Scopus produced slightly higher citation numbers than WOS. Haddow and Genoni (2010) also find that Scopus has higher citation counts than WOS and covers more journals. Bergman (2012) reports that, for social welfare journals, Scopus commonly provides higher citation counts than WOS, but ranking journals from highest to lowest citation count in each database yields similar outcomes; WOS and Scopus may therefore provide correlated citation-count rankings for other journals too. Kulkarni et al. (2009) argue that WOS and Scopus produce qualitatively and quantitatively different citations of articles published in medical journals. However, Escalona et al. (2010) find a high similarity between WOS and Scopus in other fields such as chemical engineering.
Jacso (2005) and Falagas et al. (2008) compare the weaknesses and strengths of WOS and Scopus. They confirm that Scopus covers more journals and analyzes citations faster than WOS. However, WOS’s citation analysis provides more understandable graphics and is more detailed than that of Scopus. Meho and Rogers (2008) investigate the differences between Scopus and WOS in citation ranking, citation counting and the h-index. They find no significant differences between WOS and Scopus when the databases are compared only on citations in journals. Levine-Clark and Gil (2009) present the results of a comparison of the two databases for fifty journals in business and economics. Citations from both databases are analyzed to find out whether one database is superior to the other, or whether Scopus could replace WOS. They conclude that researchers may need to use alternative sources to get a more comprehensive picture of the scholarly impact of an article. Bauer and Bakkalbasi (2005) compare the number of citations in WOS and Scopus for the Journal of the American Society for Information Science and Technology for articles published from 1985 to 2000, and find no difference in citation numbers between WOS and Scopus for articles published in 2000.
5. Forming and Costs
Regarding database forming and shape, Burnham (2006) argues that Scopus is easy to follow even for beginner users compared to WOS. LaGuardia (2005) believes that Scopus draws researchers in at first look: it is easy to read because its fonts and colours provide good contrast, and it is less cluttered. It is also oriented more toward topic searching than WOS.
Pricing is a critical issue, as potential buyers pay attention to the costs and benefits of their decisions. A cost comparison between WOS and Scopus is extremely difficult because information about their cost is closely held by the database producers (Dess, 2006). The results generally show that database pricing is a complex issue which depends on the size of the organization, the discounts negotiated and other elements. Deis and Goodman (2005) estimate WOS costs at about $100,000 per year for large organizations, with Scopus costing about 85-95% of the cost of WOS for the same organizations. LaGuardia (2005) reports that WOS refused to provide pricing information, while Scopus pricing is set as an annual subscription fee with unlimited usage, varying with the size of the organization from $20,000 to $120,000 per year. Given library budgets, it is very difficult for any organization to pay for both of these databases. Therefore, the decision about which one to choose comes down to a trade-off of cost versus performance for each database.
6. Impact Factors
The most powerful index used by the WOS and Scopus databases for ranking journals is the journal impact factor (IF) (Garfield, 2006). The IF is calculated from the citations of all articles published in the journal during a two-year period (Guz & Rushchitsky, 2009). Levine-Clark and Gil (2009) define the IF as a ratio between citations and recent citable items published. That is, a journal’s IF is calculated by dividing the number of citations in the current year to source items published in that journal during the prior two years by the number of those source items.
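In compact form, the two-year IF consistent with this definition can be written as follows (a sketch; the specific year labels are illustrative):

```latex
\mathrm{IF}_{2012} \;=\; \frac{C_{2012}(2010) + C_{2012}(2011)}{N_{2010} + N_{2011}}
```

where \(C_{2012}(y)\) is the number of citations received in 2012 by items the journal published in year \(y\), and \(N_y\) is the number of citable items it published in year \(y\).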
Researchers can obtain journals’ IFs from the two online databases. WOS, the Thomson Reuters product, annually publishes and analyzes the IFs of indexed journals. Scopus, the Elsevier product, also records journal citation information but does not report IFs for indexed journals; these can be obtained through manual calculation of journal citations or other tools (Chou, 2012). Nowadays, it is an honour for a journal to display the IF determined by WOS. However, some scholars have expressed concern that the WOS IF does not accurately reflect the true influence of social work journals within the discipline, partly because many critical journals are left out of the rankings and calculations (Bergman, 2012).
Lopez-Illescas et al. (2008) find that journals indexed in both WOS and Scopus show higher IFs in WOS, whereas journals covered only by Scopus show lower IFs than those covered by both databases. Furthermore, they find that the differences between the two databases regarding citations are much smaller than the differences regarding coverage. Abrizah et al. (2012) compare the ranking, coverage, IF and subject categorization of information science journals based on data from WOS and Scopus. These comparisons are made according to factor scores reported in the 2010 Journal Citation Reports, using the Library of Congress Classification System to compare IF and subject categorization. They find a high degree of similarity in the rank-normalized IF of titles in both the WOS and Scopus databases. Pislyakov (2009) reports that the IF ranking of indexed journals partially differs between WOS and Scopus. However, Gary and Hodkinson (2008) conclude that there is no significant difference between WOS and Scopus in journals’ IF ranking, although these studies focus only on the business and science fields. Meho and Sugimoto (2009) investigate differences between WOS and Scopus in evaluating scholarly impact, focusing on research domains and institutions, citing journals, conference proceedings, and citing countries. Their results indicate that when smaller citing entities such as journals, institutions and conference proceedings are assessed, the two databases produce significantly different results; when larger citing entities such as research domains and countries are assessed, they produce similar measures of scholarly impact.
7. Indexing (H-Index)
WOS database is a more scholarly source than Scopus database because of more indexing (Fingerman, 2006).
Some indices are proposed by WOS and Scopus databases such as h-index, g-index, impact factor, the Eigen
factor metric for journal ranking, source normalized impact per paper and relative measure index (Hirsch, 2005;
Egghe, 2006; Bergstrom, 2007; Moed, 2010; Raj & Zainab, 2012). The h-index is a well known metric for
assessing the researcher’s scientific effects (Raj & Zainab, 2012). For measuring h-index, the publication records
of an author, the number of papers published during the selected number of years and the number of citations for
each paper are considered (Moed, 2010). Glanzel (2006) believes that the advantage of h-index is that it
combines the assessment of both the number of papers (quantity) and the impact or citations to these papers
(quality). The h-index is automatically computed in both databases for every author and collections of articles
which are selected by the user.
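Neither database’s internal implementation is described in the papers reviewed here, but the definition translates directly into code; a minimal sketch in Python (the citation counts in the example are hypothetical):

```python
def h_index(citations):
    """Compute the h-index: the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations each
        else:
            break
    return h

# Example: five papers with these citation counts give h = 3
# (three papers have at least 3 citations, but not four with at least 4).
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```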
Jacso (2011) verifies the h-index, which researchers use to measure research productivity, for the Scopus database. The results reveal that 18.7 million records in Scopus have one or more cited references, representing 42% of the whole database content; the ratio of records enhanced with cited references rises from 1996 to 2009. Moreover, for the 23,455,354 records published after 1995, the h-index is 1,339, so the total number of citations must be at least 1,792,921 (an h-index of h implies at least h² citations, and 1,339² = 1,792,921). For the whole Scopus database of 44.5 million records, the h-index is 1,757 (Jacso, 2011). Bar-Ilan (2008) compares the h-index of a list of highly cited authors based on citation counts retrieved from WOS and Scopus. The results show that it makes little difference which citation tool is used to calculate the h-index of these scientists, because the results of the two databases are very close. However, there seem to be disciplinary differences in the coverage of the databases, and the resulting differences in citation numbers create difficulties for science policy makers and promotion committees.
8. Researcher Profile and ID Tools
Both the WOS and Scopus databases provide an author search feature that uses the author’s affiliation, subject and journals to classify name variations for the same author. They also offer different features for researchers to upload their profile information. The Distinct Author Set is a discovery tool in WOS which shows sets of papers likely written by one author; WOS analyzes citation data, such as journals and subject areas, to develop the distinct author set. Scopus instead has a feature called the Author Identifier. The user can enter an author’s name and obtain an exact match of the author’s full name, as well as the ways the name appears in journal articles and other records. The Scopus Author Identifier (AI) matches author names according to their affiliation, source title, subject area, address and co-authors. Consequently, the Scopus AI helps researchers get accurate and comprehensive information as quickly as possible without wasting time searching through long lists of author names.
Moreover, the ResearcherID feature in WOS allows users to upload their profile information (Vieira & Gomes, 2009). ResearcherID provides a solution to the author ambiguity problem in the scholarly research community. Each member is assigned a unique identifier, enabling researchers to manage their publication lists, track their times-cited counts and h-index, identify potential collaborators and avoid author misidentification. Additionally, ResearcherID information integrates with the Web of Knowledge and is ORCID compliant, allowing researchers to claim and showcase their publications from a single account. Figure 4 shows the ResearcherID feature in WOS.
Figure 4. Researcher ID in Web of Science
9. Conclusion
Until 2004, the Web of Science (WOS) was the only international and multidisciplinary database available for obtaining the literature of technology, science, medicine and other fields. Then Elsevier introduced Scopus, which has become a good alternative (Vieira & Gomes, 2009). The information provided by these databases identifies the journals that are active in covering current and relevant research, as well as those prominent in shaping emerging research fields. The intense competition between these databases has motivated researchers to compare them and identify their similarities and differences, and numerous studies compare them from different aspects. In this article, the WOS and Scopus databases were compared on qualitative and quantitative characteristics such as provenance, citations, searching and special features by reviewing prior studies.
The comparison of WOS and Scopus shows that WOS has strong coverage going back to 1900, with most of its journals written in English. Scopus covers a greater number of journals, but with lower impact and limited to recent articles. Both databases allow searching and sorting results by the expected parameters, such as first author, citation and institution. Regarding impact factor and h-index, prior studies obtain different results. Although there is a high association between the two databases, researchers are interested in why authors prefer one database over the other. Further studies should investigate the perceptions of authors and researchers of both databases to find the reasons that make them use one database more than the other; this could help the databases improve their features and provide better facilities.
References
Abrizah, A., Zainab, A., Kiran, K., & Raj, R. (2012). LIS journals scientific impact and subject categorization: a
comparison between Web of Science and Scopus. Scientometrics.
Bakkalbasi, N., Bauer, K., Glover, J., & Wang, L. (2006). Three options for citation tracking: Google Scholar,
Scopus and Web of Science. Biomedical Digital Libraries, 3(7). Retrieved from
http://www.bio-diglib.com/content/3/1/7.
Bar-Ilan, J. (2008). Which h-index? – A comparison of Web of Science, Scopus and Google Scholar.
Scientometrics, 74(2), 257-271. http://dx.doi.org/10.1007/s11192-008-0216-y
Bar-Ilan, J., Levene, M., & Lin, A. (2007). Some measures for comparing citation databases. Journal of
Informetrics, 1(1), 26-34. http://dx.doi.org/10.1016/j.joi.2006.08.001
Bauer, K., & Bakkalbasi, N. (2005). An examination of citation counts in a new scholarly communication environment. D-Lib Magazine, 11(9). Retrieved November 25, 2007, from http://www.dlib.org/dlib/september05/bauer/09bauer.html http://dx.doi.org/10.1045/september2005-bauer
Bergstrom, C. (2007). Eigen factor: Measuring the value and prestige of scholarly journals. College & Research
Libraries News, 68(5), 314-316.
Boyle, F., & Sherman, D. (2006). Scopus: The product and its development. The Serials Librarian, 49(3),
147-153. http://dx.doi.org/10.1300/J123v49n03_12
Burnham, J. F. (2006). Scopus database: a review. Biomedical Digital Libraries, 3(1). http://dx.doi.org/10.1186/1742-5581-3-1
Chou, P. N. (2012). A Comparison Study of Impact Factor in Web of Science and Scopus Databases for
Engineering Education and Educational Technology Journals. Issues in Informing Science and Information
Technology, 9, 187-194.
Deis, L., & Goodman, D. (2005). Web of Science (2004 version) and Scopus. The Charleston Advisor, 6(3).
Retrieved from http://www.charlestonco.com/comp.cfm?id=43
Dess, H. M. (2006). Database reviews and reports: Scopus. Issues in Science and Technology Librarianship,
45(Winter). Retrieved from http://www.istl.org
Egghe, L. (2006). An improvement of the h-index: the g-index. ISSI Newsletter, 2(1), 8-9.
Escalona Fernández, M. I., Lagar Barbosa, P., & Pulgarín Guerrero, A. (2010). Web of Science Vs. Scopus: un
estudio cuantitativo en ingeniería química. Anales de Documentación, 13, 159-175.
Falagas, M. E., Kouranos, V. D., Arencibia-Jorge, R., & Karageorgopoulos, D. E. (2008). Comparison of
SCImago journal rank indicator with journal impact factor. The FASEB Journal, 22(8), 2623-2628.
http://dx.doi.org/10.1096/fj.08-107938
Fingerman, S. (2006). Web of Science and Scopus: Current features and capabilities. Issues in Science and
Technology Librarianship, 48(Fall). Retrieved from http://www.istl.org/06-fall/electronic2.html
Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA-Journal of the American
Medical Association, 295(1), 90-93. http://dx.doi.org/10.1001/jama.295.1.90
Gary, E., & Hodkinson, S. Z. (2008). Comparison of journal citation reports and Scopus impact factors for
ecology and environmental sciences journals. Issues in Science and Technology Librarianship, 54.
Glanzel, W., Schlemmer, B., Schubert, A., & Thijs, B. (2006). Proceedings literature as additional data source for
bibliometric analysis. Scientometrics, 68(3), 457-473. http://dx.doi.org/10.1007/s11192-006-0124-y
Guz, A. N., & Rushchitsky, J. J. (2009). Scopus: A system for the evaluation of scientific journals. International
Applied Mechanics, 45(4), 351-362. http://dx.doi.org/10.1007/s10778-009-0189-4
Haddow, G., & Genoni, P. (2010). Citation analysis and peer ranking of Australian social science journals.
Scientometrics, 85(2), 471-487. http://dx.doi.org/10.1007/s11192-010-0198-4
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National
Academy of Sciences, 102(46), 16569-16572. http://dx.doi.org/10.1073/pnas.0507655102
Jacso, P. (2005). As we may search – Comparison of major features of the Web of Science, Scopus, and Google
Scholar citation-based and citation-enhanced databases. Current Science, 89(9), 1537-1547.
Kulkarni, A. V., Aziz, B., Shams, I., & Busse, J. W. (2009). Comparisons of citations in Web of Science, Scopus and Google Scholar for articles published in general medical journals. JAMA, 302, 1092-1096. http://dx.doi.org/10.1001/jama.2009.1307
LaGuardia, C. (2005). E-views and reviews: Scopus vs. Web of Science. Library Journal, 15.
Levine-Clark, M., & Gil, E. L. (2009). A comparative citation analysis of Web of Science, Scopus, and Google Scholar. Journal of Business and Finance Librarianship, 14(1), 32-46. http://dx.doi.org/10.1080/08963560802176348
Li, J., Burnham, J. F., Lemley, T., & Britton, R. M. (2010). Citation Analysis: Comparison of Web of Science,
Scopus, SciFinder, and Google Scholar. Journal of Electronic Resources in Medical Libraries, 7(3),
196-217. http://dx.doi.org/10.1080/15424065.2010.505518
Lopez-Illescas, C., Moya-Anegon, F., & Moed, H. F. (2008). Coverage and citation impact of oncological
journals in the Web of Science and Scopus. Journal of Informetrics, 2(4), 304-316.
http://dx.doi.org/10.1016/j.joi.2008.08.001
Meho, L. I., & Rogers, Y. (2008). Citation counting, citation ranking, and h-index of human-computer interaction
researchers: A comparison of Scopus and Web of Science. Journal of the American Society for Information
Science and Technology, 59(11), 1711-1726. http://dx.doi.org/10.1002/asi.20874
Meho, L. I., & Sugimoto, C. R. (2009). Assessing the scholarly impact of information studies: A tale of two
citation databases — Scopus and Web of Science. Journal of the American Society for Information Science
and Technology, 60(12), 2499-2508. http://dx.doi.org/10.1002/asi.21165
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3),
265-277. http://dx.doi.org/10.1016/j.joi.2010.01.002
Neuhaus, C., & Daniel, H. D. (2008). Data sources for performing citation analysis: an overview. Journal of
Documentation, 64(2), 193-210. http://dx.doi.org/10.1108/00220410810858010
Pislyakov, V. (2009). Comparing two thermometers: Impact factors of 20 leading economic journals according to Journal Citation Reports and Scopus. Scientometrics, 79(3), 541-550. http://dx.doi.org/10.1007/s11192-007-2016-1
Quint, B. (2006). Elsevier’s Scopus introduces citation tracker: challenge to Thomson ISI’s Web of Science?
Retrieved from http://newsbreaks.infotoday.com/nbreader.asp?ArticleID=16004
Raj, R. G., & Zainab, Z. N. (2012). Relative measure index: a metric to measure quality. Scientometrics.
http://dx.doi.org/10.1007/s11192-012-0675-z
Vieira, E. S., & Gomes, J. A. N. F. (2009). A comparison of Scopus and Web of Science for a typical university.
Scientometrics, 81(2), 587-600. http://dx.doi.org/10.1007/s11192-009-2178-0
Electronic physician; Volume 6, Issue 2, April-June 2014
http://www.ephysician.ir
Letter to the Editor
Citation Frequency and Ethical Issue
Nader Ale Ebrahim
Ph.D. of Technology Management, Visiting Research Fellow, Research Support Unit, Centre of Research Services,
Institute of Research Management and Monitoring (IPPP), University of Malaya, Kuala Lumpur, Malaysia
Corresponding Author:
Dr. Nader Ale Ebrahim, Research Support Unit, Centre of Research Services, Institute of Research Management and
Monitoring (IPPP), University of Malaya, Kuala Lumpur, Malaysia. Tel: +60.37967 7812, Fax: +60.37967 7813,
Email: aleebrahim@um.edu.my
Keywords: citation, impact, ethics
Bibliographic Information of this article:
[Nader Ale Ebrahim. Citation Frequency and Ethical Issue. Electronic Physician, 2014; 6 (2): 814-815]. (ISSN:
2008-5842). http://www.ephysician.ir Available online at: http://www.ephysician.ir/2014/814-815.pdf
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Received: May 2, 2014
Revised: 1st revision: May 4, 2014; 2nd revision: May 10, 2014
Accepted: May 10, 2014
Published: May 11, 2014
Dear Editor:
I read your publication ethics issue on “bogus impact factors” with great interest (1). I would like to draw attention to a new trend in manipulating citation counts. There are several ethical approaches to increasing the number of citations of a published paper (2). However, it is apparent that some manipulation of citation counts is occurring (3, 4). Self-citations, “those in which the authors cite their own works”, account for a significant portion of all citations (5). With the advent of information technology, it is easy to identify unusual citation trends for a paper or a journal. A web application to calculate the single publication h-index based on (6) is available online (7, 8). A tool developed by Francisco Couto (9) can measure authors’ citation impact while excluding self-citations.
Self-citation is ethical when it is a necessity. Nevertheless, there is a threshold for self-citations. Thomson Reuters’ resource, known as the Web of Science (WoS), which currently lists journal impact factors, considers self-citation to be acceptable up to a rate of 20%; anything over that is considered suspect (10). In some journals, even 5% is considered a high rate of self-citation. The ‘Journal Citation Reports’ is a reliable source for checking the acceptable level of self-citation in any field of study. The Public Policy Group of the London School of Economics (LSE) published a handbook, “Maximizing the Impacts of Your Research”, describing self-citation rates across different groups of disciplines and indicating that they vary up to 40% (11).
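As a back-of-the-envelope illustration of the 20% threshold mentioned above (the counts are hypothetical, and real screening by WoS is more involved than this ratio):

```python
def self_citation_rate(total_citations, self_citations):
    """Share of a journal's (or author's) citations that are self-citations."""
    return self_citations / total_citations if total_citations else 0.0

# Hypothetical journal: 1,200 citations in total, 300 of them self-citations.
rate = self_citation_rate(total_citations=1200, self_citations=300)
print(f"self-citation rate: {rate:.0%}")   # -> self-citation rate: 25%
print("suspect by the 20% rule:", rate > 0.20)  # -> True
```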
Unfortunately, there is no significant penalty for the most frequent self-citers, and the effect of self-citation remains positive even at very high rates of self-citation (5). However, WoS has dropped some journals from its database because of anomalous citation trends (4). The same policy should also be applied to the most frequent self-citers. The ethics of publication should be adhered to by all who wish to conduct research and publish their findings.
References
1. Jalalian M, Mahboobi H. New corruption detected: Bogus impact factors compiled by fake organizations.
Electron Physician. 2013;5(3):685-6. http://www.ephysician.ir/2013/685-686.pdf
2. Ale Ebrahim N, Salehi H, Embi MA, Habibi Tanha F, Gholizadeh H, Motahar SM, et al. Effective
Strategies for Increasing Citation Frequency. International Education Studies. 2013;6(11):93-9. doi:
10.5539/ies.v6n11p93, http://opendepot.org/1869/1/30366-105857-1-PB.pdf
3. Mahian O, Wongwises S. Is it Ethical for Journals to Request Self-citation? Sci Eng Ethics. 2014:1-3. doi:
10.1007/s11948-014-9540-1
4. Van Noorden R. Brazilian citation scheme outed. Nature. 2013;500:510–1. doi: 10.1038/500510a,
http://boletim.sbq.org.br/anexos/Braziliancitationscheme.pdf
5. Fowler JH, Aksnes DW. Does self-citation pay? Scientometrics. 2007;72(3):427-37. doi: 10.1007/s11192-007-1777-2
6. Schubert A. Using the h-index for assessing single publications. Scientometrics. 2009;78(3):559-65. doi:
10.1007/s11192-008-2208-3
7. Thor A, Bornmann L. The calculation of the single publication h index and related performance measures:
a web application based on Google Scholar data. Online Inform Rev. 2011;35(2):291-300.
8. Thor A, Bornmann L. Web application to calculate the single publication h index (and further metrics)
based on Google Scholar 2011 [cited 2014 3 May]. Available from: http://labs.dbs.uni-leipzig.de/gsh/
9. Couto F. Citation Impact Discerning Self-citations 2013 [cited 2014 3 May]. Available from: http://cids.fc.ul.pt/cids_2_3/index.php
10. Epstein D. Impact factor manipulation. The Write Stuff. 2007;16(3):133-4.
11. LSE Public Policy Group. Maximizing the impacts of your research: a handbook for social scientists. London School of Economics and Political Science, London, UK; 2011. http://www.lse.ac.uk/government/research/resgroups/lsepublicpolicy/docs/lse_impact_handbook_april_2011.pdf
Asian Social Science; Vol. 9, No. 5; 2013
ISSN 1911-2017
E-ISSN 1911-2025
Published by Canadian Center of Science and Education
Do Criticisms Overcome the Praises of the Journal Impact Factor?
Masood Fooladi1, Hadi Salehi2, Melor Md Yunus3, Maryam Farhadi1, Arezoo Aghaei Chadegani1, Hadi Farhadi4
& Nader Ale Ebrahim5
1 Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Malaysia
4 School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia
5 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
Correspondence: Masood Fooladi, Department of Accounting, Mobarakeh Branch, Islamic Azad University,
Mobarakeh, Isfahan, Iran. E-mail: foladim57@gmail.com
Received: February 16, 2013    Accepted: March 18, 2013    Online Published: April 27, 2013
doi:10.5539/ass.v9n5p176    URL: http://dx.doi.org/10.5539/ass.v9n5p176
Abstract
The journal impact factor (IF), a gauge of the influence and impact of a particular journal compared with other journals in the same area of research, reports the mean number of citations to the articles published in that journal. Although the IF attracts more attention and is used more frequently than other measures, it has been subject to criticisms which outweigh its advantages. Critically, extensive use of the IF may distort editorial and researcher behaviour, which could compromise the quality of scientific articles. Therefore, it is time to develop new journal ranking techniques beyond the journal impact factor.
Keywords: impact factor (IF), journal ranking, criticism, praise, scopus, web of science, self-citation
1. Introduction
A citation is a reference to, and an acknowledgment of, information in an article. Generally, each article ends with a reference section including all references cited in that article; each reference is called a citation. The number of citations a particular article receives from other articles is its citation count. A citation index is a type of bibliographic database that traces the references in published articles. It shows how many times an article has been cited by other articles; in other words, one can find which later articles cite which earlier articles. Citations are used to measure the importance of the information contained in an article and the effect of a journal in its area of research activity and publication.
The first citation index for published articles in scientific journals was the Science Citation Index (SCI), founded by Eugene Garfield’s Institute for Scientific Information (ISI, previously known as Eugene Garfield Associates Inc.) in 1960 (Nigam & Nigam, 2012). It was then developed to produce the Arts and Humanities Citation Index (AHCI) and the Social Sciences Citation Index (SSCI). The SSCI was one of the first databases developed on the Dialog system, in 1972. Small (1973) develops co-citation analysis as a self-organizing classification mechanism, namely “Research Reviews”. Garner, Lunin, and Baker (1967) describe the graphical nature of the worldwide citation network. Giles, Bollacker, and Lawrence (1998) introduce autonomous citation indexing, which provides automatic algorithmic extraction and classification of citations for any digital scientific and academic article. Subsequently, several citation indexing services were created to automate citation indexing, including Google Scholar, EBSCOhost, the Institute for Scientific Information (ISI) and Elsevier (Scopus).
Among all citation indexing services, ISI has dominated the citation indexing field. ISI is part of Thomson Reuters, which publishes the citation indexes in compact disc and print formats. One can access the products of ISI through the website named ‘Web of Science’ (WOS). Seven databases are accessible in WOS: Social Sciences Citation Index (SSCI), Index Chemicus, Science Citation Index (SCI), Arts & Humanities Citation Index (A&HCI), Current Chemical Reactions, Conference Proceedings Citation Index: Science, and Conference Proceedings Citation Index: Social Science and Humanities.
2. Impact Factor
One of the most important products of ISI is the journal impact factor (IF), first designed by Eugene Garfield. The IF is published by ISI as a standardized measure revealing the mean number of citations to the articles published in a particular journal. Journal Citation Reports (JCR) defines the IF as the frequency of citation of the “average paper” published in a journal in a given year. In effect, the IF is widely used by research institutions, universities and governmental agencies to measure the influence and impact of a professional journal compared with other journals in the same area of research (Kuo & Rupe, 2007). The IF is also applied to evaluate the individual performance of scientists and researchers. Editors therefore have a strong incentive to increase the IF of their journals.
To calculate the IF of a journal, the number of citations to the papers published in the journal, as reported in the JCR year, is divided by the total number of papers published in that journal during the two previous years. Citations may come from articles published in the same journal or from other proceedings, journals or books indexed by WOS. For example, an IF of 3.0 reveals that the papers published in a particular journal during the previous two years have, on average, each been cited three times. Although the IF calculation for journals is based on articles published and cited in the previous two years, one can calculate the average citation rate over longer time periods.
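A minimal sketch of this ratio in Python (the citation and paper counts are hypothetical, chosen to reproduce the IF of 3.0 from the example above):

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year journal IF: citations received in the JCR year to items
    published in the two previous years, divided by the count of those items."""
    return citations_this_year / papers_prev_two_years

# Hypothetical journal: 240 citations in the JCR year to the 80 papers
# it published across the two previous years.
print(impact_factor(citations_this_year=240, papers_prev_two_years=80))  # -> 3.0
```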
The 5-year journal IF is the number of citations to a particular journal divided by the total number of papers published in the journal during the previous five years. In certain fields of study, the 5-year journal IF is more appropriate than the 2-year IF because the body of citations on a 5-year basis may be large enough to allow a reasonable comparison. In some research areas, it takes a long time, more than two years, to publish work in response to previous articles.
The aggregate IF applies to a subject category rather than a single journal (Khan & Hegde, 2009). The calculation is the same as for a journal IF, but it considers the total number of citations to all journals in a particular subject group and the number of papers published in those cited journals. The median IF is the median value of the IF over all journals in the subject category. The number of citations to a journal divided by the total papers published in the journal in the current year is known as the immediacy index. The journal’s half-life in a given year is the median age of the articles cited in the JCR; for example, a journal half-life of six in 2007 means that the citations from 2002 to 2007 make up 50% of all the citations to the journal in 2007. The journal ranking table displays the ranking of a journal in its subject category based on the journal IF (Khan & Hegde, 2009).
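In compact form, the two ratio-style measures above can be written as follows (a sketch consistent with the definitions in this section, not notation used by JCR itself):

```latex
\text{Immediacy index}_{Y} = \frac{\text{citations in year } Y \text{ to items published in year } Y}{\text{items published in year } Y},
\qquad
\text{Aggregate IF}_{Y} = \frac{\sum_{j \in \text{category}} C_j}{\sum_{j \in \text{category}} N_j}
```

where \(C_j\) counts citations in year \(Y\) to journal \(j\)'s items from the two previous years, and \(N_j\) counts those items.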
3. Praises for Impact Factor
The IF as a measure of journals’ influence has several advantages and benefits for researchers as well as librarians, knowledge managers and information professionals. Some of the advantages are listed below:
1) The primary advantage of the impact factor is that it is very easy to measure.
2) Another advantage of the IF is that it moderates absolute citation frequencies. Since large journals provide a considerable body of citable literature which cites small journals, the IF discounts the advantage of large journals in favour of small journals. It likewise discounts the advantage of frequently published journals in favour of less frequently published ones, and of mature journals over newly established ones. In other words, the journal IF is a useful measure for evaluating journals because it offsets the advantages of age and size.
3) The IF is a useful tool for comparing different research groups and journals, and it is commonly applied in administering scientific library collections. Librarians, knowledge managers and information professionals face limited budgets when selecting journals for their institutions and departments; ISI provides them with the IF as a measure for choosing the most frequently cited journals (Khan & Hegde, 2009). About 9,000 social science and science journals from 60 countries are indexed by the Web of Knowledge.
4) IFs for indexed journals are readily available and easy to understand and use. The IF is more widely accepted and popular than alternative measures (Khan & Hegde, 2009).
4. Criticisms for Impact Factor
However, the IF has several inherent shortcomings which outweigh its advantages. The following problems relate to the calculation method of the IF:
1) It should be noted that Thomson Scientific is a commercial institution which sells its products and evaluations to research institutions and publishers. The data and measures behind the IF are not publicly available, and it is not easy for scientists to access them; the IF is therefore not subject to a peer review process. Researchers demand full transparency about how Thomson Scientific collects data and calculates citation metrics. Thomson Scientific cannot fully disclose the data used to support its published IFs, and hence its IF is not reliable. Generally, the findings of a scientific article cannot be accepted by scientists when the primary data are not available.
2) The coverage of the database is not complete, since Thomson Scientific excludes some specific types of sources from the denominator. For example, books (as a source of citations) are excluded from the database, and citations from journals not indexed by ISI are not considered in the IF calculation (Khan & Hegde, 2009). Falagas, Kouranos, Arencibia-Jorge, and Karageorgopoulos (2008) assert that one of the major shortcomings of the journal IF is that it considers only “citable” papers, mainly original papers and reviews.
3) The IF may not be an appropriate measure because it is an arithmetic mean of the number of citations per paper, and the arithmetic mean is a poor summary of a highly skewed citation distribution (Joint Committee on Quantitative Assessment of Research, 2008). The IF does not consider the actual quality of research articles, their impressiveness, or the long-term impact of journals. The IF and citation indexes reflect a specific, and relatively uncertain, type of performance; they do not provide a direct measure of quality (Moed, 2005). Therefore, using the IF to infer the quality of individual articles or their authors seems futile (Baum, 2011). In addition, the validity and appropriate use of the IF as a measure of journal importance is subject to controversy, and an independent examination might reproduce a different IF (Rossner, Van Epps, & Hill, 2007). The IF creates more controversy when it is used to assess the articles published in a journal: it appraises the reputation of the publishing journal, not the quality of the content of individual papers. In addition, Falagas et al. (2008) assert that the IF takes received citations into account only in a quantitative manner. It is conceivable that the papers published in a journal have a greater influence on science if they are cited by articles of higher scientific quality.
4) The two-year or five-year window for measuring the IF may be logical for fields of study with a fast-moving research process, but it is not reasonable for fields which require a long period for research or empirical validation and also take a long time in review. Such research may take longer than two years to be completed and published, so citations to the original papers will not be counted in the publishing journal’s IF. In addition, articles published many years ago that are still cited have a significant impact on a research area, but citations to them are not counted because of their age.
5) Assigning the same IF value to each article published in the same journal masks the large variability in article citations, and lets the majority of journals and articles free-ride on a small number of highly cited journals and articles. Only some articles are cited anywhere near the mean journal IF. In effect, the IF greatly overstates the impact of most articles, while the impact of the few highly cited articles that drive the mean is understated (Baum, 2011). Therefore, the journal IF cannot portray the individual impact of single articles, and it bears little relation to the citedness of individual articles in the publishing journal (Yu & Wang, 2007). Seglen (1997) states that the IF is wrongly used to estimate the importance of a single article based on where it is published: since the IF averages over all published papers, it underestimates the importance of the highly cited papers while overestimating the citations to average publications.
6) Although it is feasible to evaluate the IFs of the journals in which an individual has published, the IF is not applicable to individual scientists or articles. (ISI does not publish a JCR for the humanities.) The actual number of citations received by a single paper is a better measure of the importance and influence of the paper and its authors (Khan & Hegde, 2009). Garfield (1998) cautions against the “misuse in evaluating individuals” of the IF, because of the wide variation in citations from paper to paper within a single journal.
7) The IF is highly discipline-dependent, for example between the physical sciences and mathematics, because of the speed with which articles are cited in a research area (Van Nierop, 2009). It is not logical to use the IF to compare journals across different disciplines, and the absolute value of an IF is meaningless on its own. For example, a journal with an IF of two would not have much impact in a field such as microbiology, while it would be impressive in oceanography. The IF is therefore invalid as a measure of comparison between different fields and disciplines. Nevertheless, such comparisons have been widely made, not only between journals but also between scientists or university departments, even though the absolute IF cannot estimate the scientific level of a department. In evaluation programmes, especially for doctoral-degree-granting institutions, reviewers consider the IF of journals when examining scholarly outputs. This introduces a bias in which some types of research are undervalued, and it distorts the total contribution of each scientist (Khan & Hegde, 2009).
8) The IF is not relevant in some disciplines, such as engineering, where the main scientific outputs are technical reports, conference proceedings and patents. In literature, the IF is also not a relevant measure, because books, which cite other books, constitute the most important publications.
9) The number of citations from journals in less universal languages and from less-developed countries is understated, since only the ISI database is used to define the IF.
It is argued that the simple methodology applied in the calculation of the IF gives editors the opportunity to use various practices to inflate the impact factor of their journals (Garfield, 1996; Hemmingsson, Skjennald, & Edgren, 2002; The PLoS Medicine Editors, 2006). Reviewers may ask authors to expand an article’s citations before publication; this is usual in the peer review process and serves to improve the quality of the article. On the other hand, the IF indicates the importance of a journal in its subject category, so editors have a strong incentive to increase their journal’s IF (Yu, Yang, & He, 2011). There are several methods by which a journal can increase its IF. One policy is to manipulate the IF directly (Dong, Loh, & Mondry, 2005): references to a journal can be managed in order to increase the number of citations to the journal and hence raise its IF (Smith, 1997). Such journals appear to gain additional impact, quality and quantity, but there is no actual improvement in the journal’s impact and influence; this manipulation of the IF results only in an artificial evaluation.
1) Self-citation is a common way to manipulate and increase the IF of journals (Miguel & Marti-Bonmati, 2002; Fassoulaki, Papilas, Paraskeva, & Patris, 2002; Falagas & Kavvadia, 2006; Falagas & Alexiou, 2007; Yu et al., 2011). Editorial boards may press authors to expand an article’s citations, not necessarily to improve the quality of the scientific article, but to inflate the IF of their journal and artificially raise its scientific reputation (Wilhite & Fong, 2012). This business policy is called coercive citation: editors require authors to cite papers published in the same journal (self-citation) before the journal agrees to publish the article, even if these citations are not relevant to the research. Wilhite and Fong (2012) find that 20% of researchers in psychology, sociology, economics and multiple business disciplines have experienced coercive citation. They also find that journals with a lower IF have a higher tendency toward coercive citation in order to inflate their IF. Other work documents that coercive citation is common in further scientific disciplines.
2) Another policy for manipulating the IF is to focus on publishing review articles rather than research articles, because review articles survey a large body of literature and are highly citable (Andersen, Belmont, & Cho, 2006; Falagas et al., 2008; Arnold & Fowler, 2011). Review articles therefore yield a higher IF for the journals that publish them, and such journals tend to have the highest IF in their respective research areas. Journals thus tend to publish articles that are likely to receive many citations in the near future and to decline papers, such as case reports in medical journals, that are not expected to be cited.
Considering the above drawbacks, it has been suggested that a measure of journal evaluation such as the IF should have certain characteristics (Offutt, 2008). It should be independent of the length and number of the papers published in the journal; it should reward essential advances in the area over short-term or incremental contributions; and it should be relatively stable from year to year. Most importantly, it should evaluate the impact of a journal over a long period, since some articles are still cited 10, 20 or even 50 years after publication. In addition, Garfield's warnings against comparing IFs across different scientific fields should be kept in mind. Besides the IF, other internet databases that collaborate with Elsevier can be considered for calculating the impact of journals or articles.
The journal IF is an indicator of journal performance, and as such, many of the suggestions made for performance indicators apply equally to the journal IF (Greenwood, 2007). In November 2007, the European Association of Science Editors (EASE) published an official statement suggesting that the journal IF be used only, and cautiously, for assessing and comparing the importance and impact of journals, not for evaluating single articles, and certainly not for appraising research programmes or researchers (EASE, 2007). The International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "Statement on publication practices and indices and the role of peer review in research assessment" in July 2008, which suggests considering only a limited number of articles published per year by each scientist, or even penalising scientists for an excessive number of publications each year (Smith, 1997). The Deutsche Forschungsgemeinschaft (2010), the German research foundation, issued new guidelines to assess only papers, with no bibliometric information on candidates, in decision-making processes
regarding:
"...performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding
proposals, [where] increasing importance has been given to numerical indicators such as the h-index and
the impact factor".
The citation pattern of manipulated journals, especially the number of self-citations in the last two years, tends to be unusual: the self-citation rate of manipulated journals is higher than that of normal journals. The self-citation rate can therefore be used as a measure to identify manipulated journals and to calculate a "real" IF by deducting self-citations from the reported IF, although inspecting each journal in the JCR database manually is a difficult task. As discussed, journals might also publish a larger number of review articles in order to inflate their IF; the Thomson Scientific website provides directions for removing review articles from the calculation (Khan & Hegde, 2009).
5. Conclusion
Although the IF is a popular measure of journal quality, we conclude that it has its own restrictions, and we believe there should be more examination of whether and how the IF evaluates journal quality before it is broadly accepted as such a measure. The IF certainly does not assess the peer-review process of journals (Offutt, 2008), which means it is not an appropriate measure of the quality of published articles. Considering the number of published articles is one factor in assessing a promotion case, but it is an imperfect method because it counts articles instead of weighing the quality of the publications. On the other hand, it is very hard to assign professionals to read articles and give an independent opinion about the impact of a particular scientist on his or her research area.
The implication for researchers and journals is that they should not rely on this indicator alone: if the limitations discussed above are ignored, decisions based on this measure are potentially misleading. Generally, a measure falls into disuse and disrepute among members of the scientific community if it is found to be invalid, unreliable or ill-conceived. Although the IF has many limitations, as discussed in this paper, it has not lost its reputation or its use within the scientific community; indeed, it attracts ever more attention and is used ever more frequently by scientists, librarians, knowledge managers and information professionals. Critically, extensive use of the IF may distort editorial and researcher behaviour, which could compromise the quality of scientific articles: the calculation of the IF, and journals' policies to increase it, may push researchers to treat publication as a business rather than a contribution to their area of research. It is not fair to rely on such an unscientific method as the IF to appraise the quality of our efforts, and it is time to invent journal-ranking techniques that go beyond the journal impact factor. A new research trend should aim to develop journal rankings that consider not only the raw number of citations received by published papers, but also the influence or importance of the documents that issue these citations (Palacios-Huerta & Volij, 2004; Bollen, Rodriguez, & van de Sompel, 2006; Bergstrom, 2007; Ma, Guan, & Zhao, 2008). The new measure should represent scientific impact as a function not just of the quantity of citations received but of a combination of their quality and quantity.
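As a minimal sketch of this idea, in the spirit of the PageRank-style rankings cited above (Bergstrom, 2007; Ma, Guan, & Zhao, 2008), the following Python snippet weights each citation by the influence of the citing journal instead of counting citations raw; the three-journal citation matrix is hypothetical.

```python
import numpy as np

# Hypothetical journal-to-journal citation matrix: C[i, j] is the number
# of citations from journal i to journal j (self-citations set to zero,
# as Eigenfactor-style measures do).
C = np.array([[0., 30., 10.],
              [20., 0., 40.],
              [5., 15., 0.]])

# Column-stochastic transition matrix: each citing journal distributes
# one unit of influence across the journals it cites.
P = (C / C.sum(axis=1, keepdims=True)).T

# Damped power iteration (PageRank): a journal scores highly when it is
# cited by journals that themselves score highly.
d, n = 0.85, C.shape[0]
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * P @ rank

print(rank / rank.sum())   # influence-weighted scores
print(C.sum(axis=0))       # raw citation counts, for contrast
```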
References
Andersen, J., Belmont, J., & Cho, C. T. (2006). Journal impact factor in the era of expanding literature. Journal
of Microbiology, Immunology and Infection, 39, 436-443.
Arnold, D. N., & Fowler, K. K. (2011). Nefarious numbers. Notices of the American Mathematical Society, 58(3),
434-437.
Baum, J. A. C. (2011). Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies. Organization, 18(4), 449-466. http://dx.doi.org/10.1177/1350508411403531
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), 314-316.
Bollen, J., Rodriguez, M. A., & Van de Sompel, H. (2006). Journal status. Scientometrics, 69(3), 669-687.
http://dx.doi.org/10.1007/s11192-006-0176-z
Dong, P., Loh, M., & Mondry, A. (2005). The impact factor revisited. Biomedical Digital Libraries, 2.
European Association of Science Editors (EASE). (2010). Statement on inappropriate use of impact factors.
Retrieved from http://www.ease.org.uk/publications/impact-factor-statement
Falagas, M. E., & Alexiou, V. G. (2007). Editors may inappropriately influence authors’ decisions regarding
selection of references in scientific articles. International Journal of Impotence Research, 19, 443-445.
http://dx.doi.org/10.1038/sj.ijir.3901583
Falagas, M. E., & Kavvadia, P. (2006). Eigenlob: Self-citation in biomedical journals. Federation of American
Societies for Experimental Biology Journal, 20, 1039-1042. http://dx.doi.org/10.1096/fj.06-0603ufm
Falagas, M. E., Kouranos, V. D., Arencibia-Jorge, R., & Karageorgopoulos, D. E. (2008). Comparison of
SCImago journal rank indicator with journal impact factor. The Federation of American Societies for
Experimental Biology Journal, 22(8), 2623-2628.
Fassoulaki, A., Papilas, K., Paraskeva, A., & Patris, K. (2002). Impact factor bias and proposed adjustments for its determination. Acta Anaesthesiologica Scandinavica, 46, 902-905. http://dx.doi.org/10.1034/j.1399-6576.2002.460723.x
Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313, 411-413.
http://dx.doi.org/10.1136/bmj.313.7054.411
Garfield, E. (1998). Der Impact Faktor und seine richtige Anwendung [The impact factor and its correct use]. Der Unfallchirurg, 101(6), 413-414.
Garner, R., Lunin, L., & Baker, L. (1967). Three Drexel information science research studies. Madison: Drexel
Press.
Giles, C. L., Bollacker, K. D., & Lawrence, S. (1998). An automatic citation indexing system. Digital Libraries 98: Third ACM Conference on Digital Libraries, New York, 89-98.
Greenwood, D. C. (2007). Reliability of journal impact factor rankings. BMC Medical Research Methodology,
7(48), 1-6.
Hemmingsson, A., Mygind, T., Skjennald, A., & Edgren, J. (2002). Manipulation of impact factors by editors of scientific journals. American Journal of Roentgenology, 178(3), 767. http://dx.doi.org/10.2214/ajr.178.3.1780767
Joint Committee on Quantitative Assessment of Research. (2008, June 12). Citation Statistics, 1-26.
Khan, & Hegde. (2009). Is impact factor true evaluation for ranking quality measure? Journal of Library & Information Technology, 29(3), 55-58.
Kuo, W., & Rupe, J. (2007). R-impact: Reliability-based citation impact factor. IEEE Transactions on Reliability, 56(3), 366-367. http://dx.doi.org/10.1109/TR.2007.902789
Ma, N., Guan, J., & Zhao, Y. (2008). Bringing PageRank to the citation analysis. Information Processing and
Management, 44(2), 800-810. http://dx.doi.org/10.1016/j.ipm.2007.06.006
Miguel, A., & Marti-Bonmati, L. (2002). Self-citation: comparison between Radiologia, European radiology and
radiology for 1997–1998. European Radiology, 12(1), 248-252. http://dx.doi.org/10.1007/s003300100894
Moed, H. F. (2005). Citation analysis in research evaluation. Information Science and Knowledge Management.
Springer.
Nigam, A., & Nigam, P. K. (2012). Citation index and impact factor. Indian Journal of Dermatology,
Venereology, and Leprology, 78(4), 511-516. http://dx.doi.org/10.4103/0378-6323.98093
Offutt, J. (2008). The journal impact factor. Software Testing, Verification and Reliability, 18(1).
Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963-977.
http://dx.doi.org/10.1111/j.1468-0262.2004.00519.x
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data. Journal of Cell Biology, 179(6), 1091-1092.
http://dx.doi.org/10.1083/jcb.200711140
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British
Medical Journal, 314(7079), 498-502. http://dx.doi.org/10.1136/bmj.314.7079.497
Small, H. (1973). Co-citation in the scientific literature: A new measure of the relationship between two
documents. Journal of the American Society for Information Science, 24(4), 265-269.
http://dx.doi.org/10.1002/asi.4630240406
Smith, R. (1997). Journal accused of manipulating impact factor. British Medical Journal, 314(15), 461.
http://dx.doi.org/10.1136/bmj.314.7079.461d
The PLoS Medicine Editors. (2006). The impact factor game: It is time to find a better way to assess the
scientific literature. PLoS Med., 3, 291. http://dx.doi.org/10.1371/journal.pmed.0030291
Van Nierop, E. (2009). Why do statistics journals have low impact factors? Statistica Neerlandica, 63(1), 52-62.
http://dx.doi.org/10.1111/j.1467-9574.2008.00408.x
Wilhite, A. W., & Fong, E. A. (2012). Coercive citation in academic publishing. Science, 335(6068), 542-543.
http://dx.doi.org/10.1126/science.1212540
Yu, G., & Wang, L. (2007). The self-cited rate of scientific journals and the manipulation of their impact factors.
Scientometrics, 73(3), 356-366. http://dx.doi.org/10.1007/s11192-007-1779-8
Yu, G., Yang, D. H., & He, H. X. (2011). An automatic recognition method of journal impact factor manipulation.
Journal of Information Science, 37(3), 235-245. http://dx.doi.org/10.1177/0165551511400954
Australian Journal of Basic and Applied Sciences, 7(4): 198-202, 2013
ISSN 1991-8178
Does it Matter Which Citation Tool is Used to Compare the h-index of a Group of
Highly Cited Researchers?
Hadi Farhadi 1, Hadi Salehi 2, Melor Md Yunus 3, Arezoo Aghaei Chadegani 4, Maryam Farhadi 4, Masood Fooladi 4 and Nader Ale Ebrahim 5
1 School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Malaysia
4 Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran
5 Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Corresponding Author: Hadi Farhadi, School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia. E-mail: farhadihadi@yahoo.com
Abstract: The h-index, retrieved from citation indexes (Scopus, Google Scholar, and Web of Science), is used to measure scientific performance and research impact based on the number of publications and citations of a scientist. It is also easily available and may be used for performance measurement of scientists and for recruitment decisions. The aim of this study is to investigate the differences between the outputs of these three citation databases, namely Scopus, Google Scholar, and Web of Science, based upon the h-index of a group of highly cited researchers (Nobel Prize winning scientists). The purposive sampling method was adopted to collect the required data. The results showed that there is a significant difference in h-index between the three citation indexes: the Google Scholar h-index was higher than the h-index in the other two databases. It was also concluded that there is a significant positive relationship between the h-indices based on Google Scholar and Scopus. The citation indexes of Scopus, Google Scholar, and Web of Science may be useful for evaluating the h-index of scientists, but they have some limitations as well.
Key words: h-index, Scopus, Google Scholar, Web of Science, Nobel Prize, Physics, Chemistry,
Economic Sciences.
INTRODUCTION
Nowadays, when citation information is needed, there are a few sources to rely on. The most comprehensive web-based citation indexes are Scopus (http://www.scopus.com/), Google Scholar (http://scholar.google.com/), and Web of Science (http://portal.isiknowledge.com/). These are regarded as the most useful and trusted sources for searching; they are valuable tools that provide citation searching and ranking by times cited (Mikki, 2009). All three citation indexes are multidisciplinary, have international coverage, and are used to assess scientific output worldwide (Bar-Ilan, 2008).
Background:
To quantify the research output of a single scientist, Hirsch (2005) introduced the h-index, which is calculated from the number of publications and the number of citations received. As defined by Hirsch (2005), a scientist has index h if h of his or her Np papers have at least h citations each and the other (Np - h) papers have at most h citations each. It is a single number that gives a good representation of lifetime scientific achievement (Egghe & Rousseau, 2006; van Raan, 2006). For example, if a scientist has published 15 papers that each received at least 15 citations, his or her h-index is 15. Although this single number is simple to compute and takes into account both the quantity and the impact of the papers, it is essential to consider which database is reporting it.
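A minimal Python sketch of this definition follows; the citation lists are hypothetical, and in practice the counts would come from one of the citation indexes discussed here.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each
    (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # the first h papers all have >= h citations
        else:
            break
    return h

# Hypothetical citation records:
print(h_index([15] * 15))          # 15, as in the example above
print(h_index([50, 20, 8, 3, 1]))  # 3
```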
The h-index is computed from the data of the aforementioned citation indexes to measure the scientific output of a researcher, and the data from these citation indexes are important for scientists and universities all over the world. For instance, the h-index is one of the influential factors used by promotion committees to evaluate research productivity and impact at universities. Although the citing and cited documents in these citation tools are linked to each other, the h-index can be obtained separately from each of the citation databases Scopus, Google Scholar, and Web of Science. That is because each of these citation indexes has a
different collection policy, which influences both the publications covered and the number of citations (Bar-Ilan, 2008). Hence, the outputs and results from these citation indexes cannot be expected to be similar.
This study aims to identify the major differences in scientists' h-index between the three citation databases Scopus, Google Scholar, and Web of Science. Based upon the literature, it attempts to answer the following research questions:
1) How different are the results from these citation index sources?
2) Can universities and promotion committees use the h-index of a scientist as a tool for synthesizing and depicting scientific production in a more exhaustive way?
To answer these questions, the current study investigated the differences between the outputs of the three citation databases (Scopus, Google Scholar, and Web of Science) based upon the h-index of a group of highly cited researchers (Nobel Prize winning scientists).
Methodology:
There are three commonly used web-based citation indexes for measuring the h-index of a scientist. This study computed the h-index of 12 Nobel Prize winners based upon data from Scopus, Google Scholar, and Web of Science in three fields of study, namely physics, chemistry and economic sciences. We then compared the h-index data of these 12 Nobel Prize winners to find the correlations between the h-indices from the three citation index sources. The purposive sampling method was used to collect the required data. As our starting point we used the official website of the Nobel Prize (http://www.nobelprize.org/), which lists all Nobel Prize winners.
Every year, the Nobel Foundation in Stockholm, Sweden, presents an international award called the "Nobel Prize" to those who have made outstanding contributions to their disciplines. The Nobel Prize includes a medal, a diploma and a cash award, given annually to those who during the preceding year have conferred the greatest benefit on humankind. Every year, the respective Nobel Committees invite thousands of university professors, members of academies and scientists from different countries, previous Nobel laureates, and members of parliamentary assemblies to nominate candidates for the coming year's Nobel Prizes. These nominators are chosen in such a way that as many countries and universities as possible are represented over time.
For this study, we selected from the official website of the Nobel Prize 12 scientists who have won the Nobel Prize in three different fields: physics, chemistry, and economic sciences. The name of each scientist was then searched in the three citation index sources Scopus, Google Scholar, and Web of Science (see Figures 1, 2 and 3). Serge Haroche (winner of the 2012 Nobel Prize in Physics) is shown as an example in the figures below; his h-index is 34, 35 and 21 based upon Scopus, Google Scholar, and Web of Science, respectively.
Fig. 1: Serge Haroche’s h-index by Scopus is 34.
Fig. 2: Serge Haroche’s h-index by Google Scholar is 35.
Fig. 3: Serge Haroche’s h-index by Web of Science is 21.
RESULTS AND DISCUSSION
Table 1 shows the h-index of 12 Nobel Prize winner scientists in three different citation indexes of Scopus,
Google Scholar and Web of Science.
Table 1: h-index of 12 Nobel Prize winner scientists in three different citation indexes.

Nobel Prize Winner's Name | Year | Research Field | Web of Science | Scopus | Google Scholar
Serge Haroche | 2012 | Physics | 21 | 34 | 35
David J. Wineland | 2012 | Physics | 20 | 47 | 23
Saul Perlmutter | 2011 | Physics | 35 | 38 | 32
Brian P. Schmidt | 2011 | Physics | 21 | 46 | 62
Robert J. Lefkowitz | 2012 | Chemistry | 31 | 106 | 167
Brian K. Kobilka | 2012 | Chemistry | 24 | 63 | 70
Dan Shechtman | 2011 | Chemistry | 11 | 5 | 13
Akira Suzuki | 2010 | Chemistry | 85 | 56 | 79
Alvin E. Roth | 2012 | Economic Sciences | 4 | 28 | 68
Thomas J. Sargent | 2011 | Economic Sciences | 11 | 21 | 77
Christopher A. Sims | 2011 | Economic Sciences | 13 | 13 | 64
Peter A. Diamond | 2010 | Economic Sciences | 6 | 14 | 61
Our first step in understanding the differences between Scopus, Google Scholar, and Web of Science with respect to scientists' h-index was to examine the mean for each citation tool. Figure 4 shows the mean h-index of the Nobel Prize winner scientists based upon data from Google Scholar (62.58), Scopus (39.25), and Web of Science (23.5). As can be seen in Figure 4, the mean Google Scholar h-index for the scientists in this study is higher than the mean Scopus h-index and the mean Web of Science h-index. This result is reasonable, because Google Scholar indexes more articles than the other two citation tools and also covers books and conference proceedings, which can alter citation metrics considerably (Vanclay, 2007; Bar-Ilan, 2008; Mikki, 2009).
Fig. 4: Comparing the h-index of 12 Nobel Prize winners based upon data from Scopus, Google Scholar and
Web of Science.
Table 2 shows the multiple regression between the h-indices of the 12 Nobel Prize winners based upon data from Scopus, Google Scholar and Web of Science. The results show a significant positive relationship between the Scopus and Google Scholar h-indices (β = .799, p < .05), indicating that a higher Scopus h-index corresponds to a higher Google Scholar h-index; in other words, the Scopus h-index can predict the Google Scholar h-index. However, this study found no significant relationship between the Web of Science and the Google Scholar h-indices.
Table 2: Multiple regression between the h-index of 12 Nobel Prize winners based upon data from Scopus, Google Scholar and Web of Science.

Predictor | B | Beta | t
Constant | 24.761 | | 1.561
Web of Science | -0.300 | -0.164 | -0.639
Scopus | 1.143 | 0.799 | 3.122*
R² = 0.54, F = 5.32, p < 0.05
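As an illustration, the following minimal Python sketch fits the same ordinary-least-squares model to the Table 1 data (the Google Scholar h-index regressed on the Web of Science and Scopus h-indices); its unstandardised B coefficients and R² can be checked against Table 2.

```python
import numpy as np

# h-index data from Table 1 (Web of Science, Scopus, Google Scholar).
wos = np.array([21., 20., 35., 21., 31., 24., 11., 85., 4., 11., 13., 6.])
scopus = np.array([34., 47., 38., 46., 106., 63., 5., 56., 28., 21., 13., 14.])
gs = np.array([35., 23., 32., 62., 167., 70., 13., 79., 68., 77., 64., 61.])

print(gs.mean(), scopus.mean(), wos.mean())  # 62.58, 39.25, 23.5 (Figure 4)

# Ordinary least squares: gs = b0 + b1*wos + b2*scopus.
X = np.column_stack([np.ones_like(wos), wos, scopus])
coef, *_ = np.linalg.lstsq(X, gs, rcond=None)

pred = X @ coef
r2 = 1 - ((gs - pred) ** 2).sum() / ((gs - gs.mean()) ** 2).sum()
print(coef)  # unstandardised B coefficients (compare the B column of Table 2)
print(r2)    # coefficient of determination (Table 2 reports R^2 = 0.54)
```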
Based upon the sample's fields of study, the results showed that the mean h-index in chemistry (59.17) is higher than in physics (37.5) and economic sciences (31.67), suggesting that citation and publication levels in chemistry are higher than in the other two fields (see Figure 5).
Fig. 5: Comparing the h-index of 12 Nobel Prize winners based upon their research area.
Conclusions:
The h-index retrieved from the citation indexes Scopus, Google Scholar, and Web of Science is used to measure scientific performance and research impact based upon the number of publications and citations of a scientist. It is also easily available and may be used for performance measurement of scientists and for recruitment decisions. As the results for the samples of this study showed, the difference in h-index between the three citation indexes Scopus, Google Scholar and Web of Science is obvious and noticeable: the Google Scholar h-index is higher than the h-index in the other two databases. Moreover, the findings showed a significant relationship between the h-indices based upon Google Scholar and Scopus.
We can conclude that it matters which citation tool is used to compute the h-index of a scientist. Therefore, universities and promotion committees need to consider which citation tool is used to compute the h-index of a researcher or a lecturer. In general, when we intend to report the h-index of a scientist, we need to consider and mention the source of the citation index; we cannot compare the h-indices of two scientists without mentioning their sources. The citation indexes Scopus, Google Scholar and Web of Science might be useful for evaluating the h-index of scientists, but they have their own limitations as well.
REFERENCES
Bar‐Ilan, J., 2008. Informetrics at the beginning of the 21st century: A review. Journal of Informetrics, 2(1):
1‐52.
Bar‐Ilan, J., 2008. Which h‐index? A comparison of WoS, Scopus and Google Scholar. Scientometrics,
74(2): 257‐271.
Egghe, L., R. Rousseau, 2006. An informetric model for the Hirsch-index, Scientometrics, 69(1): 121-129.
Google Scholar. (No Date). About Google Scholar. Retrieved 27 January 2013, from
http://scholar.google.com/.
Hirsch, J.E., 2005. An index to quantify an individual’s scientific research output. Proceedings of the
National Academy of Sciences, 102(46): 16569‐16572.
ISI WoS. Thomson Scientific. (No Date). Web of Science. Retrieved 27 January 2013, from http://portal.isiknowledge.com/.
Mikki, S., 2009. Google Scholar Compared to Web of Science: A Literature Review. Nordic Journal of
Information Literacy in Higher Education, 1(1): 41‐51.
Nobel Prize. (No Date). About Nobel Prize. Retrieved 27 January 2013, from http://www.nobelprize.org/.
Scopus. (No Date). About Scopus. Retrieved 27 January 2013, from http://www.scopus.com/.
Van Raan, A.F.J., 2006. Comparison of the Hirsch-index with standard bibliometric indicators and with
peer judgment for 147 chemistry research groups. Scientometrics, 67(3): 491-502.
Vanclay, J.K., 2007. On the robustness of the h‐index. Journal of the American Society for Information
Science and Technology, 58(10): 1547‐1550.
International Journal of Management Science
and Business Administration
Volume 1, Issue 3, February 2015, Pages 6-15
Does a Long Reference List Guarantee More Citations?
Analysis of Malaysian Highly Cited and Review Papers
Nader Ale Ebrahim 1, H. Ebrahimian 2, Maryam Mousavi 3, Farzad Tahriri 3
1 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
2 Institute of Mathematical Sciences, Faculty of Science, University of Malaya
3 Centre for Product Design and Manufacturing, Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603, Kuala Lumpur, Malaysia
Corresponding author (e-mail): aleebrahim@um.edu.my
Abstract: Earlier publications have shown that the number of references, as well as the number of received citations, is field-dependent; consequently, a long reference list might lead to more citations. The purpose of this article is to study the relationship between the number of references and citation counts, for the concrete case of Malaysian highly cited papers and Malaysian review papers. A Malaysian paper is a paper with at least one Malaysian affiliation. A total of 2466 papers, consisting of two sets, namely 1966 review papers and 500 highly cited articles, are studied. The statistical analysis shows that an increase in the number of references is accompanied by a slight increase in the number of citations, but this increase is not statistically significant. Therefore, a researcher should not try to increase the number of received citations by artificially increasing the number of references.
Key words: H-index, Citation analysis, Bibliometrics, Impact factor, Performance evaluation
1. Introduction
Researchers use citation tracking to find the most influential articles on a particular topic and to see how often their own published papers are cited (Bakkalbasi et al. 2006). Universities, on the other hand, look for citations because of their influence on university rankings (Ale Ebrahim et al. 2013, Ioannidis 2010, Bornmann, Leydesdorff, and Wang 2014). A citation count is the number of times a research work, such as a journal article, is cited by other works. Citations per paper meaningfully influence a number of metrics, including total citation counts, citation speed, the ratio of external to internal cites, diffusion scores and the h-index (Carley, Porter, and Youtie 2013). Citation counts are still commonly used to measure the quality and reputation of research papers (Abt and Garfield 2002), and the number of citations an article receives measures its impact on a specific field (Lai, Darius, and Lerut 2012). Citation analysis is one of the most important tools for evaluating research performance (Bornmann et al. 2012), and citation indicators are important for scientists and universities all over the world (Farhadi, Salehi, Yunus, et al. 2013). The relationship between the number of references and the number of citations a paper receives was first investigated as early as 1965 (Uzun 2006, de Solla Price 1965). A long reference list at the end of a research paper may be the key to ensuring that it is well cited (Corbyn 2010, Ball 2008); hence, citation counts are correlated with reference frequencies (Abt and Garfield 2002). Webster, Jonason, and Schember (2009) raised the question "Does the number of references an article contains predict its citation count?" and found that reference counts explained 19% of the variance in citation counts. Lancho-Barrantes, Guerrero-Bote, and Moya-Anegón (2010) found that not only the number but also the citation impact of the cited references correlates with the citation counts of a paper: the higher the impact of the cited references, the higher the later impact of the citing paper (Bornmann et al. 2012). Review articles are usually highly cited compared to other types of papers (Meho 2007).
Review papers summarise the existing knowledge in a given field and are more likely to be cited (Alimohammadi and Sajjadi 2009). Several bibliometric studies have highlighted that citation counts are a function of many factors besides scientific quality (Bornmann et al. 2012): the length of the paper (Abt and Garfield 2002), its visibility (Ale Ebrahim et al. 2014), optimising scholarly literature for academic
search engines (Beel, Gipp, and Wilde 2010), adding the name of the study to the title of all publications (Sarli and Holmes 2011), publishing in a journal with a higher impact factor (Vanclay 2013), internet usage (Farhadi, Salehi, Embi, et al. 2013), gross domestic product (GDP) (Gholizadeh et al. 2014), the number of authors (Krause 2009), self-archiving (Gargouri et al. 2010), publishing in an open access journal (Swan 2010), collaborating with international authors (Pislyakov and Shukshina 2012), writing a paper with a Nobel laureate (Ball 2011) and many others (Ale Ebrahim et al. 2013), including writing a review paper (Vanclay 2013) and using more references (Corbyn 2010). In this study, the relationship between the number of references and citation counts is examined. Webster, Jonason, and Schember (2009) noted that "on average, review articles actually showed less of the relationship than standard articles" (Corbyn 2010), so both review and standard articles (papers) were investigated in this research. A total of 2466 articles, consisting of 1966 Malaysian review papers and 500 highly cited papers, were selected to examine the relationship between the number of references and the citation counts of a given article.
2. Materials and methods
All data were obtained through the Web of Science online academic database provided by Thomson Scientific. This database includes the information necessary to examine the relationship between reference and citation counts for every review and highly cited paper published in Malaysia from 1980 to October 2013. The Science Citation Index Expanded, the Social Sciences Citation Index and the Arts & Humanities Citation Index were searched for reviews and highly cited papers. For each paper, all bibliometric data were collected, especially the number of references and the number of times the paper was cited between its year of publication and 2013. Two sample sets were selected. 1- Sample one consisted of 1966 review papers in all disciplines from Malaysia, according to the Web of Knowledge classification system. Citation statistics produced over a time frame shorter than three years may not be sufficiently stable (Adams 2005, Uzun 2006), because papers that have appeared in the Web of Science databases only in the last few years have not had enough time to accumulate a stable number of citations (Webster, Jonason, and Schember 2009). Therefore, the time span was limited to 1980 through November 2010, yielding a subsample of 721 publications (37% of the original sample). Publications with zero citations
were removed. To select highly cited papers, a threshold of 10 citations per year was applied. The association between the number of references (independent variable) and the times cited per year (dependent variable) of highly cited review papers was investigated with linear and non-linear models. 2- Sample two comprises 500 highly cited publications from Malaysia. Following the Web of Science classification, the results are based on the article document type and exclude review articles, editorial material, conference papers and book reviews.
3. Results and discussion
Two sets of data, 1- 1966 review papers and 2- 500 highly cited papers, were investigated separately. The results and discussion follow.
Outliers for sample one (1966 review papers)
Because citation counts depend on the age of an article, the raw number of citations cannot serve as the criterion for a highly cited paper; the number of citations per year was therefore used instead. Papers cited at least 10 times per year are considered highly cited. Figure 3-1 shows the number of times cited per year for 660 review papers. A threshold was determined visually at 50 citations per year: papers cited more than 50 times per year are called "extremely highly cited papers" and were detected as outliers. Papers with more than 300 listed references were also detected as outliers (Figure 3-2); a short filtering sketch is given after the figure captions below.
Figure 3-1 Number of times cited per year vs number of review papers references
Figure 3-2 Number of times cited per year vs number of references in review paper
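A minimal Python sketch of this screening step, assuming each paper is reduced to its reference count and its times cited per year; the records are hypothetical, while the thresholds mirror the rules stated above.

```python
# Screen review papers: keep highly cited ones (>= 10 cites/year), then
# drop outliers (> 50 cites/year or > 300 listed references).
papers = [
    {"refs": 120, "cites_per_year": 14.2},  # kept
    {"refs": 85, "cites_per_year": 63.0},   # extremely highly cited -> outlier
    {"refs": 340, "cites_per_year": 12.1},  # too many references -> outlier
    {"refs": 60, "cites_per_year": 4.5},    # below the highly cited threshold
]

highly_cited = [p for p in papers if p["cites_per_year"] >= 10]
filtered = [p for p in highly_cited
            if p["cites_per_year"] <= 50 and p["refs"] <= 300]

print(len(highly_cited), len(filtered))  # 3 highly cited, 1 kept
```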
Correlation analysis for sample one (1966 review papers)
The correlation between the variables was modelled with a linear regression model, y = αx + β, and an exponential (non-linear) model, y = αe^(βx). The goodness of fit of both models was then measured with Spearman's rho (ρ), Kendall's tau (τ) and the Pearson correlation coefficient (r). The results of the correlation analysis are summarized in Table 3-1.
Table 3-1: The result of correlation analysis of highly cited review papers.

Coefficient | Statistic value | P-value
Spearman's rho (ρ) | 0.127 | 0.193
Kendall's tau (τ) | 0.112 | 0.093
Pearson's r | 0.152 | 0.120

Model | Equation
Linear | y = 0.0639x + 10
Non-linear | y = 10e^(0.0041x)
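A minimal SciPy sketch of this analysis follows; since the underlying dataset is not reproduced here, the arrays refs (number of references) and cpy (times cited per year) are synthetic stand-ins.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the filtered review-paper data.
rng = np.random.default_rng(0)
refs = rng.integers(20, 300, size=100).astype(float)
cpy = 10 + 0.06 * refs + rng.normal(0, 8, size=100)

# Rank and linear correlation coefficients with p-values, as in Table 3-1.
for name, test in [("Spearman's rho", stats.spearmanr),
                   ("Kendall's tau", stats.kendalltau),
                   ("Pearson's r", stats.pearsonr)]:
    stat, p = test(refs, cpy)
    print(f"{name}: {stat:.3f} (p = {p:.3f})")

# Linear model y = a*x + b, and exponential model y = a*e^(b*x) fitted by
# regressing log(y) on x (valid while y > 0).
a_lin, b_lin = np.polyfit(refs, cpy, 1)
b_exp, log_a = np.polyfit(refs, np.log(np.clip(cpy, 1e-9, None)), 1)
print(f"linear: y = {a_lin:.4f}x + {b_lin:.2f}")
print(f"exponential: y = {np.exp(log_a):.2f} e^({b_exp:.4f}x)")
```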
The associations between the variables are illustrated graphically with scatter plots, with trends drawn as solid lines. As the two figures below show, although neither the linear nor the non-linear model fits significantly, the trends are positive, which supports the hypothesis that, for a given review paper, an increase in the number of references may be accompanied by an increase in the times cited per year.
Figure 2-3 Relationship between number of references and citation counts in review papers (linear model)
Figure 3-3 Relationship between number of references and citation counts in review papers (Exponential
model)
Outlier detection for sample two (500 highly cited papers)
Papers cited at least 10 times per year are considered highly cited. Papers cited more than 100 times per year are considered extremely highly cited and were detected as outliers. Figures 3-4 and 3-5 show the raw and the filtered data, respectively.
Figure 3-4 Raw data - Number of times cited per year vs number of references 500 highly cited papers
Figure 3-5 Filtered data - Number of times cited per year vs number of references in 500 highly cited
papers
Correlation analysis for sample two (500 highly cited papers)
The association between the number of references (independent variable) and the times cited per year (dependent variable) of the 500 highly cited papers was investigated with linear and non-linear correlation analysis. The correlation was modelled with a linear regression model, y = αx + β, and an exponential (non-linear) model, y = αe^(βx). The goodness of fit was then measured with Spearman's rho (ρ), Kendall's tau (τ) and the Pearson correlation coefficient (r). The results of the correlation analysis are summarized in Table 3-2.
Table 3-2: The result of correlation analysis of 500 highly cited papers.

Coefficient | Statistic value | P-value
Spearman's rho (ρ) | 0.08 | 0.914
Kendall's tau (τ) | 0.04 | 0.467
Pearson's r | 0.06 | 0.419

Model | Equation
Linear | y = 0.03x + 2.4
Non-linear | y = 1.1e^(0.08x)
The associations between the variables are illustrated graphically with scatter plots, with trends shown by solid lines. As Figures 3-6 and 3-7 show, although neither the linear nor the non-linear model fits significantly, the positive correlation coefficients still suggest a positive trend between the number of references and the number of times cited per year.
Figure 3-6 Relationship between number of references and citation counts in 500 highly cited (linear model)
Figure 3-7 Relationship between number of references and citation counts
in 500 highly cited (Exponential Model)
4. Conclusion
Since the trend between citation count and the number of references is not statistically significant, this study cannot conclude that there is a significant association between the citation counts of Malaysian review papers in the given period and the number of references they contain; the correlation coefficient (r = 0.152, based on the subsample of 721 articles) is not statistically significant. Malaysian review papers nevertheless receive more citations than other types of papers, while the number of references in an article has the lowest impact on citations compared with writing a review paper. As this study looked only at Malaysian review papers and 500 highly cited articles, it would be necessary to conduct similar studies in other parts of the world and on other types of papers, and to examine whether the relationship investigated here is significant for those papers. This research considered the general definition of citations; future studies may differentiate between "perfunctory citations" and "organic citations", which Tang and Safer (2008) define, respectively, as citations occurring only once and in the introduction, and as references cited for "conceptual ideas" or "methodology and data" reasons.
ACKNOWLEDGEMENT
Sincere thanks to Dr. Bojan Obrenović and the International Journal of Management Science and Business Administration's board members for their useful advice.
References
Abt, Helmut A., and Eugene Garfield. 2002. "Is the relationship between numbers of references
and paper lengths the same for all sciences?" Journal of the American Society for Information
Science and Technology 53 (13):1106-1112. doi: 10.1002/asi.10151.
Adams, Jonathan. 2005. "Early citation counts correlate with accumulated impact."
Scientometrics 63 (3):567-581. doi: 10.1007/s11192-005-0228-9.
Ale Ebrahim, Nader, Hadi Salehi, Mohamed Amin Embi, Farid Habibi Tanha, Hossein Gholizadeh,
and Seyed Mohammad Motahar. 2014. "Visibility and Citation Impact." International Education
Studies 7 (4):120-125. doi: 10.5539/ies.v7n4p120.
Ale Ebrahim, Nader, Hadi Salehi, Mohamed Amin Embi, Farid Habibi Tanha, Hossein Gholizadeh,
Seyed Mohammad Motahar, and Ali Ordi. 2013. "Effective Strategies for Increasing Citation
Frequency." International Education Studies 6 (11):93-99. doi: 10.5539/ies.v6n11p93.
Alimohammadi, Dariush, and Mahshid Sajjadi. 2009. "Correlation between references and
citations." Webology 6 (2):a71.
Bakkalbasi, Nisa, Kathleen Bauer, Janis Glover, and Lei Wang. 2006. "Three options for citation
tracking: Google Scholar, Scopus and Web of Science." Biomedical Digital Libraries 3 (1):7. doi:
10.1186/1742-5581-3-7.
Ball, Philip. 2008. "A longer paper gathers more citations." Nature 455 (7211):274-275. doi:
10.1038/455274a.
Ball, Philip. 2011. "Are scientific reputations boosted artificially?" Nature, 6 May.
Beel, Jöran, Bela Gipp, and Erik Wilde. 2010. "Academic Search Engine Optimization (ASEO)."
Journal of Scholarly Publishing 41 (2):176-190. doi: 10.3138/jsp.41.2.176.
Bornmann, L., L. Leydesdorff, and J. Wang. 2014. "How to improve the prediction based on
citation impact percentiles for years shortly after the publication date?" Journal of Informetrics
8 (1):175-180. doi: 10.1016/j.joi.2013.11.005.
Bornmann, Lutz, Hermann Schier, Werner Marx, and Hans-Dieter Daniel. 2012. "What factors
determine citation counts of publications in chemistry besides their quality?" Journal of
Informetrics 6 (1):11-18. doi: http://dx.doi.org/10.1016/j.joi.2011.08.004.
Carley, S., A. L. Porter, and J. Youtie. 2013. "Toward a more precise definition of self-citation."
Scientometrics 94 (2):777-780. doi: 10.1007/s11192-012-0745-2.
Corbyn, Zoë. 2010. "An easy way to boost a paper's citations." Nature. doi: 10.1038/news.2010.406.
de Solla Price, Derek J. 1965. "Networks of Scientific Papers." Science 149 (3683):510-515.
doi: 10.1126/science.149.3683.510.
Farhadi, Hadi, Hadi Salehi, Melor Md Yunus, Arezoo Aghaei Chadegani, Maryam Farhadi,
Masood Fooladi, and Nader Ale Ebrahim. 2013. "Does it Matter Which Citation Tool is Used to
Compare the h-index of a Group of Highly Cited Researchers?" Australian Journal of Basic and
Applied Sciences 7 (4):198-202. doi: arXiv:1306.0727.
Farhadi, Maryam, Hadi Salehi, Mohamed Amin Embi, Masood Fooladi, Hadi Farhadi, Arezoo
Aghaei Chadegani, and Nader Ale Ebrahim. 2013. "Contribution of Information and
Communication Technology (ICT) in Country’S H-Index." Journal of Theoretical and Applied
Information Technology 57 (1):122-127. doi: 10.5281/zenodo.7715.
Gargouri, Yassine, Chawki Hajjem, Vincent Larivière, Yves Gingras, Les Carr, Tim Brody, and
Stevan Harnad. 2010. "Self-Selected or Mandated, Open Access Increases Citation Impact for
Higher Quality Research." PLoS ONE 5 (10):e13636. doi: 10.1371/journal.pone.0013636.
Gholizadeh, Hossein, Hadi Salehi, Mohamed Amin Embi, Mahmoud Danaee, Seyed Mohammad
Motahar, Nader Ale Ebrahim, Farid Habibi Tanha, and Noor Azuan Abu Osman. 2014.
"Relationship among Economic Growth, Internet Usage and Publication Productivity:
Comparison among ASEAN and World's Best Countries." Modern Applied Science 8 (2):160-170. doi: 10.5539/mas.v8n2p160.
Ioannidis, J. P. A. 2010. "Is there a glass ceiling for highly cited scientists at the top of research
universities?" Faseb Journal 24 (12):4635-4638. doi: 10.1096/fj.10-162974.
Krause, Kate. 2009. Increasing your Article's Citation Rates. Open Access Week. Accessed 28
May 2013.
Lai, Quirino, Tom Darius, and Jan Lerut. 2012. "The one hundred most frequently cited articles
in the field of clinical liver transplantation." Transplant International 25 (6):e76-e77.
Lancho-Barrantes, Bárbara S., Vicente P. Guerrero-Bote, and Félix Moya-Anegón. 2010. "What
lies behind the averages and significance of citation indicators in different disciplines?" Journal
of Information Science 36 (3):371-382. doi: 10.1177/0165551510366077.
Meho, Lokman I. 2007. "The rise and rise of citation analysis." Physics World 20:32–36.
Pislyakov, Vladimir, and Elena Shukshina. 2012. "Measuring Excellence in Russia: Highly Cited
Papers, Leading Institutions, Patterns of National and International Collaboration." Proceedings
of STI 2012, Montréal.
Sarli, Cathy, and Kristi Holmes. 2011. "Strategies for Enhancing the Impact of Research."
Washington University School of Medicine. Accessed 9 May. https://becker.wustl.edu/impact-assessment/strategies.
Swan, Alma. 2010.
Tang, Rong, and Martin A. Safer. 2008. "Author-rated importance of cited references in biology and psychology publications." Journal of Documentation 64 (2):246-272. doi: 10.1108/00220410810858047.
Uzun, Ali. 2006. "Statistical relationship of some basic bibliometric indicators in scientometrics
research." International Workshop on Webometrics, Informetrics and Scientometrics & Seventh
COLLNET Meeting, Nancy (France), May 10 - 12, 2006.
Vanclay, Jerome K. 2013. "Factors affecting citation rates in environmental science." Journal of
Informetrics 7 (2):265-271. doi: http://dx.doi.org/10.1016/j.joi.2012.11.009.
Webster, Gregory D, Peter K Jonason, and Tatiana Orozco Schember. 2009. "Hot topics and
popular papers in evolutionary psychology: Analyses of title words and citation counts in
Evolution and Human Behavior, 1979–2008." Evolutionary Psychology 7 (3):348-362.
Modern Applied Science; Vol. 8, No. 5; 2014
ISSN 1913-1844 E-ISSN 1913-1852
Published by Canadian Center of Science and Education
Economic Growth and Internet Usage Impact on Publication
Productivity among ASEAN’s and World’s Best Universities
Hossein Gholizadeh1, Hadi Salehi2, Mohamed Amin Embi3, Mahmoud Danaee4, Ali Ordi5, Farid Habibi Tanha6,
Nader Ale Ebrahim7 & Noor Azuan Abu Osman1
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia, Bangi, 43600, Malaysia
4 Faculty of Agriculture, Roudehen Branch, Islamic Azad University, Roudehen, Iran
5 Universiti Teknologi Malaysia, Advance Informatics School (AIS), Kuala Lumpur, Malaysia
6 Department of Financial Sciences, University of Economic Sciences, Tehran, 1593656311, Iran
7 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
Correspondence: Hadi Salehi, Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University,
Najafabad, Isfahan, Iran. E-mail: hadisalehi1358@yahoo.com
Received: July 3, 2014   Accepted: July 21, 2014   Online Published: September 1, 2014
doi:10.5539/mas.v8n5p169   URL: http://dx.doi.org/10.5539/mas.v8n5p169
Abstract
Measuring the number of papers published each year, publication productivity is a factor that shows the reputation of universities and countries. However, the effect of a growing economy and of internet usage on publication productivity in Asian countries has not yet been explored. The present research examines publication productivity among the elite universities in ASEAN countries and the top ten universities in the world over the last twenty years (from 1993 to 2012). Furthermore, the current research studies the relationship among publication, gross domestic product (GDP) and internet usage; the publications of the top ten Malaysian universities were considered for the same period of time. The Science Direct database was used to obtain the exact numbers of documents, such as papers, conference articles, review papers and letters, published by these universities in the last twenty years. Moreover, the data for GDP and the number of internet users were collected through the World Bank database (World Data Bank). One-way ANOVA was used to compare all kinds of publications, and multiple regression analysis was applied to investigate the impact of economic growth and internet usage on publication productivity. The results showed that the rate of publication growth was 1.9%, 20.9% and 65.5% in the top universities of the world, the ASEAN countries and Malaysia, respectively. The results also showed a positive and significant correlation between GDP and the number of internet users and the number of publications in ASEAN and Malaysian universities. Internet usage had much more influence than GDP in predicting the number of publications among these groups, except for the top ten Malaysian universities from 2003 to 2012. In summary, publication trends in the top ten Malaysian and ASEAN universities are promising. However, policy makers and science managers should spend a greater percentage of their GDP on internet facilities and on research whose outputs lead to more rapid economic growth and internet usage.
Keywords: ASEAN, publication productivity, documents, internet usage, GDP, Malaysian Universities,
publication trend
1. Introduction
Ten countries in Southeast Asia formed the geo-political and economic Association of Southeast Asian Nations (ASEAN). It was first formed on 8 August 1967 by Indonesia, Malaysia, the Philippines, Singapore and Thailand. Other countries, namely Brunei, Burma (Myanmar), Cambodia, Laos and Vietnam, later became members and expanded the group. The chief purpose of the organization is to increase economic growth (Sarel, 1997).
Gross domestic product (GDP), the market price of all officially recognized goods and services produced in a country in a specific period of time, is the chief tool for measuring a country's economy. A review of previously published papers suggests a relationship between economic growth and both education and paper publication, although this result is not fully supported by empirical facts (Jin, 2013; Nelson, 1966; Lucas Jr, 1988; Becker, 2009; Romer, 1990). A positive and significant impact of education on economic growth was found by Mankiw et al. (1992) and Barro (1991); however, the possibility of a reverse relationship between economic growth and education was shown by Bils and Klenow (2000). In addition, Jin and Jin recently indicated that the effect of publication productivity on economic growth is not the same in different fields: for example, engineering and science are positively related to economic growth, while the social sciences do not have the same effect (Jin, 2013).
Nowadays, compared with the past twenty years, publication productivity is a major topic for researchers and
students because research findings can positively affect the whole community (Zain et al., 2009). As the number
of educated employees has risen, the economies of countries have grown very fast (Jin, 2013). Highly developed
countries such as the United States and England host most of the world's top productive research universities.
The number of publications indicates research productivity and is employed to rank countries and universities
(Yazit & Zainab, 2007; Narin & Hamilton, 1996; Toutkoushian et al., 2003; Liu & Cheng, 2005; Meho & Spurgin,
2005). It can also be used to determine an author's productivity or the publication productivity of research
groups (Liu & Cheng, 2005; Hart, 2000; Uzun, 2002; Gu & Zainab, 2001; Fox, 1983).
Frequent citation of previously published papers by new papers signals both recognition and influence in the
field of study. Review articles that cite many other articles can reveal the major areas of a discipline and
highlight the growth of specific fields. Moreover, the most frequently cited papers are mostly written by
well-known researchers whose ideas can shape the future directions of the field (Lefaivre & O'Brien, 2011; Kelly
et al., 2010; Ponce & Lozano, 2010; Joynt & Leonard, 1980).
Several indicators of academic or research performance are used to rank educational institutes and universities.
They include alumni and staff winning Nobel Prizes and Fields Medals, highly cited researchers, papers
published in Nature and Science, papers indexed in major citation indices, and the per capita academic
performance of an institution. The Academic Ranking of World Universities (ARWU), commonly known as the
Shanghai Ranking, was founded and compiled by Shanghai Jiao Tong University and was the first global ranking
of universities to be published. The rankings have been conducted since 2003 and updated annually. Today,
ARWU is regarded as one of the three most influential and widely observed international university rankings,
along with the QS World University Rankings and the Times Higher Education World University Rankings.
The current study mainly investigates publication productivity among the best universities in ASEAN countries
and the world's top ten universities from 1993 to 2002 and from 2003 to 2012. The study also aimed to achieve
the following objectives:
• Studying the relationship among publication productivity, gross domestic product (current US $) and
internet users
• Examining the publication trend of ten elite Malaysian universities in specific time periods
Since ScienceDirect offers about 20% more coverage than Web of Science, it was used as the first full-text
scholarly database in this research. The researchers hypothesize a positive relationship among economic growth,
the number of internet users and the number of published papers in elite ASEAN universities as well as the ten
best universities in the whole world.
2. Methodology
The ScienceDirect database was used to collect the number of documents, including articles, conference papers,
review papers, letters and books, published in the last two decades, from 1993 to 2002 and from 2003 to 2012.
These data were collected to make a comparison among the top university in each ASEAN country, the top ten
universities in Malaysia and the top ten universities in the world. To find the first university in each ASEAN
country and the top ten Malaysian universities, we used the number of publications in the ScienceDirect database.
Moreover, to determine the top ten universities in the world, the Academic Ranking of World Universities
(ARWU) was used. Furthermore, for each university, the main subject area (overall), the number of papers
published in Nature and Science, and the most cited papers were identified. The number of citations that each paper received
during one week was identified as well.
To compare all kinds of publications among the top ten universities in the world, ASEAN and Malaysia, one-way
ANOVA was applied. As the homogeneity-of-variance assumption was not met, the Welch statistic was used to test
the equality of means. Moreover, to evaluate the relationship of publications (articles, conference papers, review
papers, letters) with GDP and Internet usage, the Spearman correlation coefficient test was applied. To investigate
the impact of economic growth and internet usage on publication productivity, multiple regression was applied to
examine to what extent the proposed multiple linear regression model is supported by the research data. The
regression examined how well the number of publications could be predicted from GDP and internet usage. In the
multiple regression model, GDP and Internet usage were set as the independent variables and the number of
publications was considered the dependent variable.
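A minimal sketch of this three-step pipeline in Python may make the analysis concrete (a sketch only, not the authors' code; the data frame below holds synthetic placeholder observations, and all column names are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def welch_anova(groups):
    """Welch's F test for equality of means under unequal group variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([g.mean() for g in groups])
    v = np.array([g.var(ddof=1) for g in groups])
    w = n / v                                # precision weights
    grand_mean = (w * m).sum() / w.sum()     # variance-weighted grand mean
    num = (w * (m - grand_mean) ** 2).sum() / (k - 1)
    tmp = (((1 - w / w.sum()) ** 2) / (n - 1)).sum()
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return f, df1, df2, stats.f.sf(f, df1, df2)

# Synthetic placeholder data: one row per university-year observation.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["world", "asean", "malaysia"], 40),
    "publications": rng.poisson(np.repeat([2400, 260, 80], 40)).astype(float),
    "gdp": rng.uniform(1e10, 1e13, 120),
    "internet_users": rng.uniform(1e5, 1e8, 120),
})

# 1) Welch ANOVA comparing publication counts across the three groups.
samples = [g["publications"].to_numpy() for _, g in df.groupby("group")]
print(welch_anova(samples))

# 2) Spearman rank correlations of publications with GDP and internet usage.
for col in ("gdp", "internet_users"):
    rho, p = stats.spearmanr(df["publications"], df[col])
    print(col, round(rho, 3), round(p, 4))

# 3) Multiple linear regression; GDP is log-transformed, following the
#    notes to Tables 4 and 5.
X = sm.add_constant(pd.DataFrame({"log_gdp": np.log(df["gdp"]),
                                  "internet": df["internet_users"]}))
print(sm.OLS(df["publications"], X).fit().summary())
```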
3. Results
According to the Academic Ranking of World Universities (ARWU), the top ten universities in the world are
mostly located in the United States (8 universities) or the United Kingdom (2 universities) (see Table 1). Moreover,
the first university in each ASEAN country and the top ten universities in Malaysia, based on the number of
publications in the ScienceDirect database, are listed in Table 1. The main research areas in the world's best
universities were physics and astronomy (7 universities), medicine (2 universities) and engineering (1 university).
In these universities, the average numbers of papers published in Nature and Science were 1586 and 1419,
respectively (see Table 1).
Table 1. The average number of published papers

Institution (country) | Established | Overall publications (ScienceDirect) | Main subject area | Main subject area (% of publications) | Papers in Nature | Papers in Science | Most cited paper, citations on 22/7/2013 | Citations on 29/7/2013 | Most cited paper (publication year)

Top world universities:
Harvard University (US) | 1636 | 74433 | Physics and Astronomy | 14.8 | 1430 | 2294 | 10224 | 10255 | 1990
Stanford University (US) | 1891 | 110914 | Engineering | 13.6 | 861 | 1593 | 6249 | 6266 | 2001
Massachusetts Institute of Technology (MIT) (US) | 1861 | 134794 | Physics and Astronomy | 20.1 | 1563 | 1860 | 11678 | 11732 | 2000
University of California, Berkeley (US) | 1868 | 158231 | Physics and Astronomy | 15.3 | 1864 | 2233 | 18659 | 18757 | 1965
University of Cambridge (UK) | 1209 | 135913 | Physics and Astronomy | 14.9 | 4099 | 644 | 7966 | 7977 | 1990
California Institute of Technology (US) | 1891 | 62675 | Physics and Astronomy | 26.3 | 974 | 1134 | 8657 | 8705 | 1995
Princeton University (US) | 1764 | 62273 | Physics and Astronomy | 20.1 | 754 | 945 | 6123 | 6136 | 1998
Columbia University (US) | 1754 | 112569 | Medicine | 17.9 | 676 | 1403 | 10425 | 10484 | 1998
University of Chicago (US) | 1890 | 90126 | Medicine | 21.7 | 980 | 1560 | 11741 | 11777 | 1953
University of Oxford (UK) | 1096 | 122553 | Physics and Astronomy | 15.2 | 2658 | 526 | 10198 | 10216 | 2001

Top ASEAN universities:
National University of Singapore (Singapore) | 1905 | 74484 | Engineering | 17.9 | 71 | 0 | 2171 | 2180 | 2003
University of Malaya (Malaysia) | 1949 | 21563 | Medicine | 14.9 | 24 | 0 | 445 | 449 | 2000
Mahidol University (Thailand) | 1943 | 20291 | Medicine | 41.9 | 0 | 0 | 1494 | 1503 | 2005
Institut Teknologi Bandung (Indonesia) | 1959 | 2979 | Engineering | 23.6 | 0 | 5 | 330 | 330 | 2001
International Rice Research Institute (Philippines) | 1960 | 2955 | Agricultural and Biological | 54 | 15 | 11 | 857 | 861 | 1972
Vietnam National University (Vietnam) | 1945 | 1230 | Computer Science | 15.2 | 0 | 0 | 153 | 153 | 2000
Universiti Brunei Darussalam (Brunei) | 1985 | 864 | Agricultural and Biological | 10.4 | 0 | 0 | 222 | 224 | 2003
Institut Pasteur du Cambodge (Cambodia) | 1953 | 251 | Medicine | 44.9 | 0 | 0 | 680 | 687 | 2009
National University of Laos (Laos) | 1996 | 178 | Agricultural and Biological | 26.3 | 0 | 0 | 171 | 171 | 2003
University of Yangon (Myanmar) | 1878 | 109 | Chemistry | 13.6 | 0 | 1 | 77 | 77 | 1999

Top Malaysian universities:
University of Malaya | 1949 | 21572 | Medicine | 14.9 | 24 | 0 | 445 | 449 | 2000
Universiti Sains Malaysia | 1962 | 17054 | Material Science | 13.3 | 0 | 0 | 808 | 811 | 2004
Universiti Putra Malaysia | 1969 | 16322 | Agricultural and Biological | 15.7 | 0 | 0 | 452 | 453 | 1996
Universiti Kebangsaan Malaysia | 1970 | 15010 | Engineering | 15.5 | 0 | 0 | 449 | 455 | 2008
Universiti Teknologi Malaysia | 1975 | 10134 | Engineering | 26.1 | 0 | 0 | 260 | 261 | 2004
Universiti Teknologi MARA | 1956-1965 | 6784 | Engineering | 22.5 | 0 | 0 | 305 | 307 | 2010
International Islamic University Malaysia | 1983 | 3995 | Engineering | 19.8 | 0 | 0 | 100 | 101 | 2007
Multimedia University | 1996 | 3872 | Engineering | 27 | 0 | 0 | 275 | 276 | 2001
Universiti Teknologi Petronas | 1997 | 3343 | Computer Science | 23.5 | 0 | 0 | 77 | 77 | 2008
Universiti Malaysia Perlis | 2001 | 2321 | Engineering | 32.6 | 0 | 0 | 137 | 137 | 2007
In contrast, the top universities in ASEAN countries published, on average, 11 papers in Nature and 2 papers in
Science. The average numbers of citations for the most cited paper of each university in these three groups
(world, ASEAN and Malaysia) were 10192, 660 and 331, respectively. Furthermore, the results showed 39
citations per week for the most cited papers in the world's top universities, compared with 4 citations per week in
ASEAN universities (see Table 1, Figure 1).
Figure 1. Number of citations in one week for the most cited papers in each university (top); publication trend
(bottom)
The National University of Singapore, with 74484 papers, ranked first among ASEAN countries and was followed
by the University of Malaya with 21563 publications. The University of Yangon in Myanmar had the fewest
publications (109) among ASEAN countries. Interestingly, publication trends in ASEAN countries are promising
compared to the top ten universities in the world (see Appendixes 1 and 2). The rate of publication growth
between these two decades was 1.9, 20.9 and 65.5 percent in the top universities in the world, ASEAN and
Malaysia, respectively (see Figure 1).
Comparing all kinds of publications, the results of the ANOVA showed significant differences among these three
groups. The pairwise comparison showed no significant difference between ASEAN and Malaysian universities,
while both were significantly lower than the top ten world universities (see Table 2).
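SciPy has no built-in Tamhane procedure, but its essence, pairwise Welch t-tests with a conservative adjustment of the p-values, can be sketched as follows (a simplified stand-in for SPSS's Tamhane T2 using a Sidak correction, run on made-up log-transformed counts as in the note to Table 2):

```python
import itertools
import numpy as np
from scipy import stats

def pairwise_welch_sidak(samples, names):
    """Pairwise Welch t-tests with Sidak-adjusted p-values: a simplified
    stand-in for the Tamhane T2 post-hoc comparison used in Table 2."""
    n_pairs = len(samples) * (len(samples) - 1) // 2
    for (i, a), (j, b) in itertools.combinations(enumerate(samples), 2):
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch t-test
        p_adj = 1 - (1 - p) ** n_pairs                 # Sidak adjustment
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, adj. p = {p_adj:.4g}")

# Made-up publication counts, log-transformed as in Table 2.
rng = np.random.default_rng(1)
groups = [np.log1p(rng.poisson(lam, 30)) for lam in (2400, 260, 80)]
pairwise_welch_sidak(groups, ["Top world", "ASEAN", "Malaysia"])
```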
Table 3 shows the correlations of the indices (articles, conference papers, review papers, and letters) with GDP and
internet users for these three groups. The results showed a positive and significant correlation between GDP and
the number of internet users, on the one hand, and the number of publications in ASEAN and top ten Malaysian
universities, on the other. However, there was a negative correlation between GDP and internet users and the
number of letters published from 1993 to 2002 in Malaysia. Moreover, there was a negative and significant
correlation between GDP and the number of articles published in the world's top universities (see Table 3).
The R-squared (R2) values presented in Table 4 provide information about the goodness of fit of each model. In
regression, the R2 coefficient of determination is a statistical measure of how well the regression line
approximates the real data points; an R2 of 1 indicates that the regression line perfectly fits the observed data.
The R2 value of 0.584 for ASEAN universities from 1993 to 2002 implies that the two-predictor model explained
about 58.4% of the variance in publications. Table 5 reveals that, based on the reported F-statistics, the model fits
the data: the slope of the estimated linear regression line was not equal to zero, which confirmed a linear
relationship between publications and the two predictor variables.
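The link among the sums of squares, R2 and the F-statistic can be checked directly against the values later reported in Table 5. A short check for the ASEAN group in 1993-2002 follows; the small gap between the computed 0.595 and the reported 0.584 is consistent with the rounded sums of squares printed in the table:

```python
# Sums of squares for the ASEAN group, 1993-2002, as reported in Table 5.
ss_regression, df_regression = 1.656e7, 2
ss_residual, df_residual = 1.126e7, 71

r_squared = ss_regression / (ss_regression + ss_residual)
f_statistic = (ss_regression / df_regression) / (ss_residual / df_residual)
print(round(r_squared, 3))    # ~0.595, close to the reported R2 of 0.584
print(round(f_statistic, 1))  # ~52.2, matching the F of 52.212 in Table 5
```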
Table 2. A (Welch robust tests of equality of means); B (pairwise comparison using the Tamhane test)

(A)
Document type | Welch statistic (1993-2002) | df | Sig. | Welch statistic (2003-2012) | df | Sig.
Article | 306.521 | 137.71 | P<0.05 | 222.531 | 182.888 | P<0.05
Conference | 119.638 | 135.955 | P<0.05 | 77.143 | 188.254 | P<0.05
Review | 159.91 | 142.948 | P<0.05 | 221.178 | 160.595 | P<0.05
Letter | 143.944 | 168.747 | P<0.05 | 101.268 | 156.708 | P<0.05
Total publication | 319.809 | 137.21 | P<0.05 | 230.36 | 180.99 | P<0.05

(B)*
Decade | Group | Article | Conference | Review | Letter | Total publication
1993-2002 | Top world | 2030.73a | 264.1a | 124.6a | 23.93a | 2443.36a
1993-2002 | ASEAN | 222.98b | 28.76b | 6.55b | 1.65b | 259.94b
1993-2002 | Malaysia | 66.58c | 8.87b | 2.13b | 0.78b | 78.36c
2003-2012 | Top world | 3304.72a | 776.77a | 374.79a | 58.3a | 4514.58a
2003-2012 | ASEAN | 574.85b | 166.54b | 35b | 7.04b | 783.43b
2003-2012 | Malaysia | 509.03b | 235.81b | 22.39b | 3.31b | 770.54b

Means with the same letter are not significantly different in each column.
* Data are log transformed.
Table 3. Correlation coefficients between the number of articles, conference papers, review papers and letters and
GDP and internet usage for the top ten universities of the world, ASEAN and Malaysia

Group | Document type | GDP (1993-2002) | Internet (1993-2002) | GDP (2003-2012) | Internet (2003-2012)
WORLD | Article | -0.042 | .229* | -0.004 | .531**
WORLD | Conference | 0.178 | .232* | 0.012 | 0.158
WORLD | Review | 0.185 | .497** | -.443** | -0.062
WORLD | Letter | -0.128 | 0.137 | -.256* | .324**
ASEAN | Article | .723** | .611** | .779** | .800**
ASEAN | Conference | .775** | .599** | .737** | .739**
ASEAN | Review | .565** | .574** | .518** | .705**
ASEAN | Letter | .416** | .416** | .415** | .567**
MALAYSIA | Article | 0.133 | .280** | .624** | .595**
MALAYSIA | Conference | .249* | .400** | .879** | .876**
MALAYSIA | Review | 0.09 | 0.171 | .530** | .442**
MALAYSIA | Letter | -0.03 | -0.008 | .338** | .258*

**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
Table 4. Results of multiple linear regression of GDP and internet usage on the total number of publications for
the top ten universities of the world, ASEAN and Malaysia

1993-2002
Group | Predictor | B (unstandardized) | Std. Error | Beta (standardized) | t | Sig. | R2
Top world | GDP* | -1149.03 | 279.863 | -0.408 | -4.106 | P<0.05 | 0.187
Top world | internet | 21.03 | 4.943 | 0.423 | 4.254 | P<0.05 |
ASEAN | GDP* | 197.843 | 70.785 | 0.214 | 2.795 | P<0.05 | 0.584
ASEAN | internet | 43.194 | 4.661 | 0.708 | 9.267 | P<0.05 |
Malaysia | GDP* | 100.412 | 164.662 | 0.066 | 0.61 | 0.543 | 0.063
Malaysia | internet | 1.771 | 0.888 | 0.216 | 1.994 | P<0.05 |

2003-2012
Group | Predictor | B (unstandardized) | Std. Error | Beta (standardized) | t | Sig. | R2
Top world | GDP* | -1461.77 | 446.388 | -0.297 | -3.275 | P<0.05 | 0.276
Top world | internet | 121.544 | 25.935 | 0.425 | 4.687 | P<0.05 |
ASEAN | GDP* | 363.724 | 161.614 | 0.177 | 2.251 | P<0.05 | 0.551
ASEAN | internet | 42.706 | 4.983 | 0.676 | 8.57 | P<0.05 |
Malaysia | GDP* | 6219.755 | 1339.622 | 1.141 | 4.643 | P<0.05 | 0.421
Malaysia | internet | -50.131 | 23.01 | -0.535 | -2.179 | 0.102 |

* log transformed.
Standardized regression coefficients are presented in Table 4 to show the importance of the two predictors in
predicting the number of publications. The independent variable with the higher absolute beta coefficient
contributes more to the prediction of the total number of publications. For the world's top universities, the beta
coefficients were -0.408 and -0.297 for GDP and 0.423 and 0.425 for Internet usage in 1993-2002 and 2003-2012,
respectively. This means that Internet usage had a much more significant influence than GDP in predicting the
number of publications. The results showed the same pattern for ASEAN universities: in both decades, Internet
usage had a much more significant effect than GDP. Interestingly, GDP had more predictive power (beta = 1.141)
than the number of internet users (beta = -0.535) from 2003 to 2012 in the top ten Malaysian universities.
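Standardized (beta) coefficients like those in Table 4 can be reproduced by z-scoring every variable before fitting the regression. A minimal sketch, reusing the placeholder `df` from the Methodology sketch (so the variable names are assumptions, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardized_betas(y: pd.Series, X: pd.DataFrame) -> pd.Series:
    """OLS on z-scored variables; the fitted slopes are the betas."""
    z = lambda s: (s - s.mean()) / s.std(ddof=1)
    return sm.OLS(z(y), sm.add_constant(X.apply(z))).fit().params.drop("const")

betas = standardized_betas(df["publications"],
                           pd.DataFrame({"log_gdp": np.log(df["gdp"]),
                                         "internet": df["internet_users"]}))
print(betas)  # the predictor with the larger |beta| contributes more
```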
4. Discussion and Conclusion
In this study, a comparison was made among the top university in each ASEAN country, the top ten Malaysian
universities and the world's top universities regarding the relationship of economic growth and internet usage
with publication productivity from 1993 to 2002 and from 2003 to 2012. The Shanghai Ranking (ARWU) was used
to determine the top ten universities in the world, as it is the first global ranking of universities to be published and
one of the three most influential and widely observed international university rankings. Moreover, the numbers
of publications were used to find the first university in each ASEAN country and the top ten Malaysian
universities because most of these universities were not listed in the ARWU, QS, or the Times Higher Education World
University Rankings.
Publication productivity is an indicator of research output and can be used to rank countries and universities
(Yazit & Zainab, 2007; Narin & Hamilton, 1996; Toutkoushian et al., 2003; Liu & Cheng, 2005; Meho & Spurgin,
2005). It can also be used to determine an author's productivity, the publication productivity of research groups,
and the productivity of individuals in a particular discipline (Liu & Cheng, 2005; Hart, 2000; Uzun, 2002; Gu &
Zainab, 2001; Fox, 1983; Yi et al., 2008). The world's best universities mostly work on physics and astronomy,
and institutes working in these fields published more papers and received more citations: about 144.2 times and
709.5 times as many papers as ASEAN universities in Nature and Science, respectively (see Table 1), and 9.8
times more citations per week for their most cited papers (see Table 1).
Table 5. ANOVA table

1993-2002 (ANOVAb)
Group | Source | Sum of Squares | df | Mean Square | F | Sig.
world | Regression | 1.756E7 | 2 | 8781193.216 | 12.390 | .000a
world | Residual | 6.875E7 | 97 | 708732.439 | |
world | Total | 8.631E7 | 99 | | |
asean | Regression | 1.656E7 | 2 | 8279590.609 | 52.212 | .000a
asean | Residual | 1.126E7 | 71 | 158575.023 | |
asean | Total | 2.782E7 | 73 | | |
my | Regression | 57700.227 | 2 | 28850.113 | 3.281 | .042a
my | Residual | 853014.813 | 97 | 8793.967 | |
my | Total | 910715.040 | 99 | | |

2003-2012 (ANOVAb)
Group | Source | Sum of Squares | df | Mean Square | F | Sig.
world | Regression | 5.958E7 | 2 | 2.979E7 | 17.996 | .000a
world | Residual | 1.440E8 | 87 | 1655372.703 | |
world | Total | 2.036E8 | 89 | | |
asean | Regression | 9.489E7 | 2 | 4.745E7 | 50.153 | .000a
asean | Residual | 7.379E7 | 78 | 946035.347 | |
asean | Total | 1.687E8 | 80 | | |
my | Regression | 2.040E7 | 2 | 1.020E7 | 33.387 | .000a
my | Residual | 2.658E7 | 87 | 305522.329 | |
my | Total | 4.698E7 | 89 | | |

a. Predictors: (Constant), internet, log GDP.
b. Dependent Variable: total publication.
The publication trend from 1993 to 2012 in ASEAN and top ten Malaysian universities, with 20.9% and 65.5%
growth, is promising compared to the world's top ten universities (1.9%). ASEAN and Malaysian universities were
established, on average, 66 and 37 years ago, respectively, while the top universities in the world were established,
on average, 327 years ago. This can be one reason for the steep publication-productivity trend in ASEAN and
Malaysian universities. In addition, the growth of publications might be overestimated due to omitted variables,
and reverse causality from GDP to publications is another possibility. The number of publications in Malaysia
increased dramatically from 2007 to 2012 (almost 5 times) (see Appendixes 1 and 2). One of the main reasons for
this increase could be the strong focus on enhancing the quality of research in research-intensive universities such
as the University of Malaya. As specified in the 10th Malaysia Plan, 1% of Malaysia's GDP will be spent on
research and development projects. Furthermore, another important reason could be the change of dissertations
from conventional to research-based: the number of PhD students has increased 10 times in the past years (from
about 4,000 students in 2002 to 40,000 in 2012). The top ten universities in Malaysia are shown in Table 1. It
should be mentioned that the first five universities shown are ranked as research universities and receive more
government funding.
The findings of this study showed a positive and significant correlation between GDP and the number of
publications in ASEAN and Malaysian universities. This finding is in line with previous research that showed a
positive and significant correlation between education and economic growth (Mankiw et al., 1992; Barro, 1991;
Bils & Klenow, 2000). Interestingly, there was a negative and significant correlation between the number of
publications (articles and letters) and economic growth in the top ten Malaysian universities from 1993 to 2012.
Besides, Jin and Jin (2013) showed in their recent study that publication productivity affects economic growth
differently in various fields. In the top ten universities located in the United States and the United Kingdom, GDP
did not have positive effects on the number of publications, while significant effects of GDP on the number of
publications were seen in Malaysian universities, especially from 2003 to 2012.
However, it should be noted that this study evaluated only the effects of GDP and internet usage on publications in
recent years. Since educational effects unfold over longer horizons, further investigation using data that
correspond to the stock of publications in earlier years would be important.
Science managers and policy makers in ASEAN countries should spend a greater percentage of their GDP on
research and development projects, as Malaysia does. Moreover, increasing the number of PhD students and
changing university programs to paper-based dissertations could be another way to increase the number of
publications. Furthermore, it is necessary to find different ways to improve the quality and visibility of research
studies and to invest more in research whose outputs lead to more rapid economic growth.
References
Barro, R. J. (1991). Economic growth in a cross section of countries. The Quarterly Journal of Economics, 106,
407-443. http://dx.doi.org/10.2307/2937943
Becker, G. S. (2009). Human capital: A theoretical and empirical analysis, with special reference to education.
University of Chicago Press.
Bils, M., & Klenow, P. J. (2000). Does schooling cause growth? American Economic Review, 1160-1183.
http://dx.doi.org/10.1257/aer.90.5.1160
Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2008). Comparison of PubMed, Scopus, web of
science, and Google scholar: Strengths and weaknesses. The FASEB Journal, 22, 338-342.
http://dx.doi.org/10.1096/fj.07-9492LSF
Fox, M. F. (1983). Publication productivity among scientists: A critical review. Social Studies of Science, 13,
285-305. http://dx.doi.org/10.1177/030631283013002005
Gu, Y., & Zainab, A. (2001). Publication productivity of Malaysian researchers in the field of Computer Science
and Information Technology. Malaysian Journal of Library & Information Science, 6, 1-23.
Hart, R. L. (2000). Co-authorship in the academic library literature: A survey of attitudes and behaviors. The
Journal of Academic Librarianship, 26, 339-345. http://dx.doi.org/10.1016/S0099-1333(00)00140-3
Jin, J. C., & Jin, L. (2013). Research publications and economic growth: Evidence from cross-country
regressions. Applied Economics, 45, 983-990. http://dx.doi.org/10.1080/00036846.2011.613785
Joynt, R. L., & Leonard, J. A. (1980). Dantrolene sodium suspension in treatment of spastic cerebral palsy.
Developmental
Medicine
&
Child
Neurology,
22,
755-767.
http://dx.doi.org/10.1111/j.1469-8749.1980.tb03742.x
Kelly, J., Glynn, R., O’briain, D., Felle, P., & Mccabe, J. (2010). The 100 classic papers of orthopaedic surgery: A
bibliometric analysis. Journal of Bone & Joint Surgery, British Volume, 92, 1338-1343.
http://dx.doi.org/10.1302/0301-620X.92B10.24867
Lefaivre, K. A., & O’brien, P. J. (2011). 100 most cited articles in orthopaedic surgery. Clinical Orthopaedics
and Related Research®, 469, 1487-1497. http://dx.doi.org/10.1007/s11999-010-1604-1
Liu, N. C., & Cheng, Y. (2005). The academic ranking of world universities. Higher education in Europe, 30,
127-136. http://dx.doi.org/10.1080/03797720500260116
Lucas, Jr, R. E. (1988). On the mechanics of economic development. Journal of monetary economics, 22, 3-42.
http://dx.doi.org/10.1016/0304-3932(88)90168-7
Mankiw, N. G., Romer, D., & Weil, D. N. (1992). A contribution to the empirics of economic growth. The
Quarterly Journal of Economics, 107, 407-437. http://dx.doi.org/10.2307/2118477
Meho, L. I., & Spurgin, K. M. (2005). Ranking the research productivity of lis faculty and schools: An
evaluation of data sources and research methods. Journal of the American Society for Information Science
and Technology, 56(12), 1314–1331.
Narin, F., & Hamilton, K. S. (1996). Bibliometric performance measures. Scientometrics, 36, 293-310.
http://dx.doi.org/10.1007/BF02129596
Nelson, R. R., & Phelps, E. S. (1966). Investment in humans, technological diffusion, and economic growth. The
American Economic Review, 56, 69-75.
Ponce, F. A., & Lozano, A. M. (2010). Highly cited works in neurosurgery. Part I: the 100 top-cited papers in
neurosurgical
journals:
A
review.
Journal
of
neurosurgery,
112,
223-232.
http://dx.doi.org/10.3171/2009.12.JNS091599
Romer, P. M. (1990). Endogenous technological change. Journal of political Economy, S71-S102.
http://dx.doi.org/10.1086/261725
Sarel, M. (1997). Growth and productivity in ASEAN countries. International Monetary Fund.
Toutkoushian, R. K., Porter, S. R., Danielson, C., & Hollis, P. R. (2003). Using publications counts to measure
an institution's research productivity. Research in Higher Education, 44, 121-148.
http://dx.doi.org/10.1023/A:1022070227966
Uzun, A. (2002). Productivity ratings of institutions based on publication in Scientometrics, Informetrics, and
Bibliometrics, 1981–2000. Scientometrics, 53, 297-307. http://dx.doi.org/10.1023/A:1014864827441
Yazit, N., & Zainab, A. (2007). Publication productivity of Malaysian authors and institutions in LIS. Malaysian
Journal of Library and Information Science, 12.
Yi, H., Ao, X., & Ho, Y. S. (2008). Use of citation per publication as an indicator to evaluate pentachlorophenol
research. Scientometrics, 75, 67-80. http://dx.doi.org/10.1007/s11192-007-1849-y
Zain, Z. M., Ishak, R., & Ghani, E. K. (2009). The influence of corporate culture on organisational commitment:
A study on a Malaysian listed company. European Journal of Economics, Finance and Administrative
Sciences, 17, 16-26.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/3.0/).
International Education Studies; Vol. 6, No. 11; 2013
ISSN 1913-9020
E-ISSN 1913-9039
Published by Canadian Center of Science and Education
Effective Strategies for Increasing Citation Frequency
Nader Ale Ebrahim1, Hadi Salehi2, Mohamed Amin Embi3, Farid Habibi Tanha4, Hossein Gholizadeh5, Seyed
Mohammad Motahar6 & Ali Ordi7
1 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP),
University of Malaya, Malaysia
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia, Bangi, 43600, Malaysia
4 Department of Financial Sciences, University of Economic Sciences, Tehran, 1593656311, Iran
5 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur,
Malaysia
6 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, 43600, Malaysia
7 Universiti Teknologi Malaysia, Advance Informatics School (AIS), Kuala Lumpur, Malaysia
Correspondence: Nader Ale Ebrahim, Research Support Unit, Centre of Research Services, Institute of Research
Management and Monitoring (IPPP), University of Malaya, 50603, Kuala Lumpur, Malaysia. Tel:
603-7967-7355. E-mail: aleebrahim@um.edu.my
Received: September 12, 2013   Accepted: October 12, 2013   Online Published: October 23, 2013
doi:10.5539/ies.v6n11p93   URL: http://dx.doi.org/10.5539/ies.v6n11p93
Abstract
Due to the effect of citation impact on the Times Higher Education (THE) world university ranking system, most
researchers are looking for techniques to increase their citation record. By reviewing the relevant articles, this
paper extracts 33 different ways to increase the likelihood of citation. The results show that more visible articles
tend to receive more downloads and citations. This is probably the first study to collect over 30 different ways to
improve the citation record. Further study is needed to explore and expand these techniques in specific fields of
study in order to make the results more precise.
Keywords: university ranking, improve citation, citation frequency, research impact, open access, h-index
1. Introduction
Research output is an essential part of an institution's measure and evaluation of research quality. Previously, the
number of publications and journal impact factors were the means to derive research ratings. Recent approaches to
rating research quality rely on the number of citations per article. Citation counts show how many times an article
has been cited by other articles (Fooladi et al. 2013). Citations to research publications are a quality indicator,
important for both the author and the affiliated university (Jones and Evans 2013). Most researchers are evaluated
based upon their publications as well as the number of citations their publications receive. One of the key ways to
increase citations is to expose the research output to the widest possible audience: if people are unaware of the
research, they will not cite it, and the more researchers and students in other fields hear about the research, the
more citations it will receive. Citations to an article might depend strongly on its visibility rather than its merit
(Marashi et al. 2013). Ale Ebrahim (2012) argues that publishing a high-quality paper in a scientific journal is only
half the way to receiving citations in the future; the rest of the way is advertising and disseminating the publication
using the proper “Research Tools”. Post-print publishing, which makes peer-reviewed, published research articles
freely available to anyone with an internet connection, often greatly increases the citation frequency of articles
(LiU E-Press 2007). The following section introduces 33 ways which may help researchers to improve the number
of their citations.
2. 33 Ways for Improving Citations
1. Use A Unique Name Consistently Throughout Academic Careers
Authors are highly advised to use the same variation of their name consistently throughout their academic
careers. If the name is a common name, consider adding your full middle name to distinguish it from other
authors. Consistency enhances retrieval (Sarli and Holmes 2011).
2. Use a standardized institutional affiliation and address, using no abbreviations (Sarli and Holmes 2011).
Standardization of author affiliation is important to make sure work can be attributed to the correct author and
institution (Jones and Evans 2013). Providing accurate contact details is essential so that researchers can be
contacted directly for queries, further information and discussions about the publication (Wong 2008).
3. Repeat key phrases in the abstract while writing naturally.
Choose some key phrases for your study and repeat them in the abstract of your paper. Since search engines
and citation trackers search the abstract of your article, the natural repetition of key words increases the chance
that your paper will be retrieved easily (Sarli and Holmes, 2011; Jones and Evans, 2013).
4. Assign keyword terms to the manuscript (Sarli and Holmes 2011).
Using keywords is a vital part of abstract writing, because of the practice of retrieving information electronically:
keywords act as the search term. Use keywords that are specific, and that reflect what is essential about the paper.
Put yourself in the position of someone researching in your field: what would you look for? Consider also
whether you can use any of the current "buzzwords" (Emerald Guide 2012).
5. Make a unique phrase that reflects the author's research interest and use it throughout academic life.
Add the name of the study to the title of all publications and use the same title/name consistently (Sarli and
Holmes 2011).
6. Publish in a journal with a high impact factor (Vanclay 2013).
The most effective strategy to increase citation rates is publishing in a journal with higher impact factor (Vanclay
2013). Dhawan and Gupta (2005) studied 1101 papers and found that articles published in high impact factor
journals increase the probability of getting cited.
7. Self-archive articles.
Free online availability increases a paper's impact (Lawrence 2001); therefore, maximize the visibility of your
research by making copies of your articles available online (Jones and Evans 2013). Gargouri et al. (2010) have
made a strong and a declarative link between self-archiving and increased citation performance.
8. Keep your professional web pages and published lists up to date (Jones and Evans 2013).
The advantages of self-archiving on the web and linking the archived copies from an up-to-date publication list
are obvious: freely accessible articles increase citations by 50% or more (Harnad 2006).
9. Make your research easy to find, especially for online searchers (Jones and Evans 2013).
Jamali and Nikzad (2011) investigated 2172 articles and found that there is a positive relationship between the
number of downloads and citations. Research also shows a correlation between highly cited articles and the
likelihood of their being online (Vaughan and Shaw 2003).
10. Open Access (OA) increases citation rate (MacCallum and Parthasarathy 2006).
Free access increases citation rates, searching online is more efficient and following hyperlinks quickly leads
researchers to their prevailing opinion (Evans 2008). Open Access has a positive impact on growth of citations
(Table 1) (Swan 2010).
11. Deposit paper in Open Access repository (Frost 2009).
For example, Ray Frost is a chemist who publishes prolifically. About three years ago, he began depositing his
articles in the Queensland University of Technology (QUT) repository. So far, he has deposited over 300 of his
articles. Figure 1 (derived data from the Web of Science) shows the patterns of publication and citations to those
publications. When Ray started putting his articles into the QUT repository, the numbers of citations began to
take off. The latest count is 1200 in one year. Even though Ray’s publication rate went up a bit over this period,
the increase in citations is impressive (Frost 2009).
Table 1. Effect of Open Access (OA) on the level of citations: size of the OA citation advantage when found, and
where explicitly stated by discipline (Swan 2010)

Discipline | % increase in citations with Open Access
Physics/astronomy | 170 to 580
Mathematics | 35 to 91
Biology | -5 to 36
Electrical engineering | 51
Computer science | 157
Political science | 86
Philosophy | 45
Medicine | 300 to 450
Communication studies (IT) | 200
Agricultural sciences | 200 to 600
Figure 1. The patterns of citations to Ray Frost’s articles that are deposited in the Queensland University of
Technology (QUT) repository (Frost 2009)
12. Publish with international authors (Pislyakov and Shukshina 2012)
Citation analysis shows that papers with international co-authors are cited up to four times more often than those
without international co-authors (Jones and Evans 2013). Krause (2009) argued that articles published with
multi-countries or multi-institutes collaborations get cited more. Authors who are often involved in international
collaboration received more citations (Aksnes 2003).
13. Team-authored articles get cited more (Krause 2009)
Wuchty et al. (2007) used 19.9 million papers over 50 years and demonstrated that team-authored articles
typically produce more frequently cited research than individuals. A recent study by Cotropia and Petherbridge
(2013) of law review articles published within two decades also demonstrated that team research is, on average,
more frequently cited than individual research. Typically, highly cited articles are authored by a large number of
scientists (Aksnes 2003).
14. Use more references
There is a ridiculously strong relationship between the number of citations a paper receives and the number of its
references (Corbyn 2010).
15. Publish a longer paper
A longer paper gathers more citations (Ball 2008; Abt 1998). Hamrick et al. (2010) indicated that longer papers
are associated with more citations.
16. Publish papers with Nobel laureates (Ball 2011)
Some landmark papers of Nobel laureates quite quickly give their authors a sudden boost in citation rate and this
boost extends to the author's earlier papers too, even if they were in unrelated areas (Ball 2011).
17. Contribute to Wikipedia (SAGE 2012)
Try to contribute to Wikipedia. As a good example, one paper (Nader Ale Ebrahim et al. 2009) that was used as
a reference in defining virtual teams in Wikipedia has received significantly more citations than the rest of the
articles from the same author.
18. Start blogging (SAGE 2012)
Use blogs and podcasts to leverage on-going researcher discussion on the Internet (Taylor & Francis Group
2012a). Web 2.0 tools such as wikis and blogs can be created to inform, describe and link people’s research
interests and publications (Wong 2008). Authors are encouraged to promote their papers by adding links, of
which web search engines such as Google take particular notice in their page ranking (Smith 2005).
19. Join academic social networking sites (Taylor & Francis Group 2012b)
Increasing the availability of articles through social networking sites broadens dissemination, increases use, and
enhances professional visibility, which leads to increased citations and usage. Academica is an online social
reference tool that allows reference sharing among academics and researchers. Alternatively, researchers may
use Citeulike to share their interests in research publications (Wong 2008). Academica, Citeulike, ResearchGate
and LinkedIn are just a few examples of knowledge-sharing tools that make others aware of research articles
that may be relevant to authors and hence get them cited.
20. Write a review paper
Reviews are more likely to be cited than original research papers. Some types of articles including editorials,
letters to editors, news items, meeting abstracts, and case studies are generally poorly cited (Taylor & Francis
Group 2012a). Authors seeking to be well cited should aim to write comprehensive and substantial review
articles, and submit them to journals with a high impact factor that carry previous articles on the topic (Vanclay
2013).
21. Papers published after having first been rejected elsewhere receive significantly more citations (Ball 2012)
Submission history affects post-publication impact: resubmissions from other journals received significantly
more citations than first-intent submissions (Calcagno et al. 2012).
22. Papers with a larger number of “callouts” are likely to receive a higher number of citations (Hamrick et al.
2010)
A “callout” is a phrase or sentence from the paper that is displayed in a different font, somewhere in the paper.
Generally, callouts are inserted by the editorial staff to call attention to potentially interesting aspects of a paper
(Hamrick et al. 2010).
23. Avoid question-type titles
Jamali and Nikzad (2011) investigated 2172 articles and found that the type of title affects the number of
downloads and citations; in particular, articles with question-type titles tend to be downloaded more but cited
less than others.
24. Sharing detailed research data
Open data (publicly available datasets) are significantly associated with a 69% increase in citations to the
articles they accompany. This correlation is independent of journal impact factor, country of authors and time
since publication (Piwowar et al. 2007; Swan 2010).
25. Publish across disciplines
Publishing across disciplines, e.g. in chemistry, biological science and physics, has been found to increase
citations (Ortega and Antell 2006).
26. Present a working paper (Taylor & Francis Group 2012a)
Try to go to a prestigious conference and present some parts of your research, or publish a working paper.
Working papers are freely available before and after the articles are published. Researchers may upload their
working papers into open access repositories, including personal websites or more formal repositories such as
arXiv and SSRN (McCabe 2011).
27. Publish your article in one of the journals everyone in your discipline reads (Taylor & Francis Group 2012a)
Choosing a journal that matches a researcher's field of study is very important because it makes it more likely
that the article will receive more citations. A journal which covers a broad range of disciplines may be the best
choice.
28. Publicize yourself - link your latest published article to your email signature (Taylor & Francis Group 2012a)
A great way to spread researchers' outputs and get the extra attention of email recipients is to add a link to the
latest publication. This little section of contact information, which most people ignore, provides a good platform
for publication marketing.
29. Publish your work in a journal with the highest number of abstracting and indexing (Nader Ale Ebrahim
2012)
Abstracting and indexing services generate visibility for the material published in most journals (Garfield and
Merton 1979). Citation potential increases with the visibility of scientific materials; therefore, a journal with the
highest number of abstracting and indexing services in different databases can be a good target.
30. Create a podcast describing the research project
Research is not just text and figures. Create a podcast describing the research project and submit it to YouTube
or Vimeo (Sarli and Holmes 2011). Podcasts can describe the research effort, and video is an increasingly
important way for researchers to communicate their results (Sarli and Holmes 2011).
31. Make an online CV like ORCID or ResearcherID
An online CV links the list of published papers to open access versions of the relevant articles (Sahu 2005) and
increases the visibility of researchers' output to the academic community.
32. Publish tutorial papers
A tutorial paper is “a paper that organizes and introduces work in the field. A tutorial paper assumes its audience
is inexpert; it emphasizes the basic concepts of the field and provides concrete examples that embody these
concepts” (ACM 2013). Tutorial papers tend to have a higher number of citations (Hamrick et al. 2010).
And finally:
33. Use all the “Enhancing Visibility and Impact” tools which are available on
http://www.mindmeister.com/39583892/research-tools-by-nader-ale-ebrahim
Familiarity with academic advertisement tools allows researchers to increase their h-index in a short time. The
h-index shows an academician's influence in a specified field of research (Aghaei Chadegani et al. 2013); a person
with a higher h-index has higher-quality publications with higher numbers of citations (Nader Ale Ebrahim 2012).
The advertisement section of the above mind map includes tools which can assist researchers to disseminate their
published papers and increase their visibility.
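As a concrete illustration of the h-index mentioned above, a short sketch (the citation counts are made-up example values):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4: four papers have
# at least four citations each, but there are not five with at least five.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```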
3. Conclusion
Publishing research output in high-impact journals is a primary concern of researchers, but they also need to
consider different ways to receive more citations after a paper is published. Once a work is published, the citations
it attracts depend on both its quality and its visibility. Since researchers cannot increase the quality of an already
published paper, they can instead apply some of these 33 key points to increase the visibility of their published
papers.
References
Abt, H. A. (1998). Why some papers have long citation lifetimes. [10.1038/27355]. Nature, 395(6704), 756-757.
http://dx.doi.org/10.1038/27355
ACM. (2013). ACM Computing Surveys. Retrieved May 30, 2013, from http://csur.acm.org/author_info.html
Aghaei Chadegani, A., Salehi, H., Yunus, M. M., Farhadi, H., Fooladi, M., & Farhadi, M., et al. (2013). A
Comparison between Two Main Academic Literature Collections: Web of Science and Scopus Databases.
Asian Social Science, 9(5), 18-26. http://dx.doi.org/10.5539/ass.v9n5p18
Aksnes, D. W. (2003). Characteristics of highly cited papers. Research Evaluation 12(3), 159-170.
http://dx.doi.org/10.3152/147154403781776645
Ale Ebrahim, N. (2012). Publication Marketing Tools “Enhancing Research Visibility and Improving Citations”.
Research Tools in Education Series, 1(2), 1-86.
Ale Ebrahim, N., Ahmed, S., & Taha, Z. (2009). Virtual R & D teams in small and medium enterprises: A
literature review. [Review]. Scientific Research and Essay, 4(13), 1575–1590.
Ball, P. (2008). A longer paper gathers more citations. Nature, 455(7211), 274-275.
http://dx.doi.org/10.1038/455274a
Ball, P. (2011, 6 May). Are scientific reputations boosted artificially? Nature.
Ball, P. (2012, 11 October). Rejection improves eventual impact of manuscripts. Nature.
Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & de Mazancourt, C. (2012). Flows of Research
Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns. Science, 338(6110),
1065-1069. http://dx.doi.org/10.1126/science.1227833
Corbyn, Z. (2010). An easy way to boost a paper's citations. Nature. http://dx.doi.org/10.1038/news.2010.406
Cotropia, C. A., & Petherbridge, L. (2013). The Dominance of Teams in the Production of Legal Knowledge.
Loyola-LA Legal Studies.
Dhawan, S., & Gupta, B. (2005). Evaluation of Indian physics research on journal impact factor and citations
count: A comparative study. DESIDOC Journal of Library & Information Technology, 25(3), 3-7.
Emerald Guide. (2012). How to... write an abstract. Retrieved May 9, 2013, from
http://www.emeraldinsight.com/authors/guides/write/abstracts.htm?part=1
Evans, J. A. (2008). Electronic Publication and the Narrowing of Science and Scholarship. Science, 321(5887),
395-399. http://dx.doi.org/10.1126/science.1150473
Fooladi, M., Salehi, H., Yunus, M. M., Farhadi, M., Aghaei Chadegani, A., Farhadi, H., et al. (2013). Do
Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9(5), 176-182,
http://dx.doi.org/10.5539/ass.v9n5p176
Frost, R. (2009). Case study: Open Access visibility and impact of an individual researcher. Retrieved May 9,
2013, from
http://www.openscholarship.org/jcms/c_6220/case-study-open-access-visibility-and-impact-of-an-individual-researcher
Garfield, E., & Merton, R. K. (1979). Perspective on Citation Analysis of Scientists. In Citation indexing: Its
theory and application in science, technology, and humanities (Vol. 8). Wiley New York.
Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., et al. (2010). Self-Selected or Mandated,
Open Access Increases Citation Impact for Higher Quality Research. PLoS ONE, 5(10), e13636.
http://dx.doi.org/10.1371/journal.pone.0013636
Hamrick, T. A., Fricker, R. D., & Brown, G. G. (2010). Assessing What Distinguishes Highly Cited from
Less-Cited Papers Published in Interfaces. Interfaces, 40(6), 454-464.
http://dx.doi.org/10.1287/inte.1100.0527
Harnad, S. (2006). Publish or perish—self-archive to flourish: the green route to open access. ERCIM News, 64.
Jamali, H. R., & Nikzad, M. (2011). Article title type and its relation with the number of downloads and citations.
Scientometrics, 88(2), 653-661. http://dx.doi.org/10.1007/s11192-011-0412-z
Jones, K., & Evans, K. (2013). Good Practices for Improving Citations to your Published Work (pp. 2).
University of BATH.
Krause, K. (2009). Increasing your Article's Citation Rates. Open Access Week.
Lawrence, S. (2001). Free online availability substantially increases a paper's impact. [10.1038/35079151].
Nature, 411(6837), 521-521.
LiU E-Press (2007). One way to increase citation frequency. Retrieved May 9, 2013, from
http://www.ep.liu.se/authorinf/postpubl.en.asp
MacCallum, C. J., & Parthasarathy, H. (2006). Open Access Increases Citation Rate. PLoS Biol, 4(5), e176,
http://dx.doi.org/10.1371/journal.pbio.0040176
Marashi, S.-A., Seyed Mohammad Amin, H.-N., Alishah, K., Hadi, M., Karimi, A., Hosseinian, S., et al. (2013).
Impact of Wikipedia on citation trends. EXCLI Journal, 12, 15-19.
McCabe, M. J. (2011). Online Access and the Scientific Journal Market: An Economist’s Perspective. (Vol.
Draft Report for the National Academy of Sciences, pp. 1-36). University of Michigan and SKEMA
Business School.
Ortega, L., & Antell, K. (2006). Tracking Cross-Disciplinary Information Use by Author Affiliation:
Demonstration of a Method. College & Research Libraries, 67(5), 446-462.
Pislyakov, V., & Shukshina, E. (2012). Measuring Excellence in Russia: Highly Cited Papers, Leading
Institutions, Patterns of National and International Collaboration. Paper presented at the Proceedings of
STI 2012, Montréal.
Piwowar, H. A., Day, R. S., & Fridsma, D. B. (2007). Sharing Detailed Research Data Is Associated with
Increased Citation Rate. PLoS ONE, 2(3), 308. http://dx.doi.org/10.1371/journal.pone.0000308
SAGE (2012). 10 Ways to Increase Usage and Citation of your Published Article Using Social Media. Retrieved
May 9, 2013, from http://www.sagepub.com/authors/journal/10ways.sp
Sahu, D. (2005). Open Access: Why India Should Brace it? In (pp. 1-49).
Sarli, C., & Holmes, K. (2011). Strategies for Enhancing the Impact of Research. Retrieved May 9, 2013, from
https://becker.wustl.edu/impact-assessment/strategies
Smith, A. G. (2005). Citations and Links as a Measure of Effectiveness of Online LIS Journals. IFLA Journal,
31(1), 76-84. http://dx.doi.org/10.1177/0340035205052651
Swan, A. (2010). The Open Access citation advantage: Studies and results to date.
Taylor & Francis Group (2012a). Optimize citations. Retrieved
http://journalauthors.tandf.co.uk/beyondpublication/optimizingcitations.asp
Taylor & Francis Group (2012b). Promote your article. Retrieved
http://journalauthors.tandf.co.uk/beyondpublication/promotearticle.asp
May
May
9,
9,
2013,
from
2013,
from
Vanclay, J. K. (2013). Factors affecting citation rates in environmental science. Journal of Informetrics, 7(2),
265-271. http://dx.doi.org/10.1016/j.joi.2012.11.009
Vaughan, L., & Shaw, D. (2003). Bibliographic and Web citations: What is the difference? Journal of the
American
Society
for
Information
Science
and
Technology,
54(14),
1313-1322.
http://dx.doi.org/10.1002/asi.10338
Wong, R. (2008). Ways to Maximise Citations for Researchers (pp. 1-7). University of Sheffield.
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The Increasing Dominance of Teams in Production of Knowledge.
Science, 316(5827), 1036-1039. http://dx.doi.org/10.1126/science.1136099
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/3.0/).
Modern Applied Science; Vol. 8, No. 5; 2014
ISSN 1913-1844 E-ISSN 1913-1852
Published by Canadian Center of Science and Education
Equality of Google Scholar with Web of Science Citations: Case of
Malaysian Engineering Highly Cited Papers
Nader Ale Ebrahim1, Hadi Salehi2, Mohamed Amin Embi3, Mahmoud Danaee4, Marjan Mohammadjafari5,
Azam Zavvari6, Masoud Shakiba6 & Masoomeh Shahbazi-Moghadam7
1 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP),
University of Malaya, Malaysia
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Bangi, 43600, Malaysia
4 Faculty of Agriculture, Roudehen Branch, Islamic Azad University, Roudehen, Iran
5 Department of Industrial Engineering, Faculty of Engineering, Science and Research Branch, Islamic Azad
University, Kerman, Iran
6 Center for Software Technology and Management, Faculty of Information Science and Technology, Universiti
Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia
7 Perdana School of Science, Technology and Innovation Policy, Universiti Teknologi Malaysia
Correspondence: Hadi Salehi, Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University,
Najafabad, Isfahan, Iran. E-mail: hadisalehi1358@yahoo.com
Received: June 1, 2014   Accepted: June 23, 2014   Online Published: August 6, 2014
doi:10.5539/mas.v8n5p63   URL: http://dx.doi.org/10.5539/mas.v8n5p63
Abstract
This study uses citation analysis from two citation tracking databases, Google Scholar (GS) and ISI Web of
Science, in order to test the correlation between them and to examine the effect of the number of paper versions on
citations. The data were retrieved from the Essential Science Indicators and Google Scholar for 101 highly cited
papers from Malaysia in the field of engineering. An equation for estimating ISI citations from Google Scholar
citations is offered. The results show a significant and positive relationship of both Google Scholar and ISI Web of
Science citations with the number of versions. This relationship is stronger between versions and ISI citations
(r = 0.395, p < 0.01) than between versions and Google Scholar citations (r = 0.315, p < 0.01). Free access to the
data provided by Google Scholar, together with its correlation with the costly ISI citation counts, allows more
transparency in tenure reviews, funding decisions and other science policy, making it possible to count citations
and analyze scholars' performance more precisely.
Keywords: bibliometrics, citation analysis, evaluations, equivalence, Google Scholar, highly cited, ISI Web of
Science, research tools, h-index
1. Introduction
The citation index, a type of bibliometric method, traces the references in a published article. It shows how many
times an article has been cited by other articles (Fooladi et al., 2013). Citations are applied to evaluate academic
performance and the importance of the information contained in an article (Zhang, 2009). This feature helps
researchers get a preliminary idea of the articles and research that make an impact in a field of interest. The
avenues for citation tracking have greatly increased in the past years (Kear & Colbert-Lewis, 2011). Citation
analysis was monopolized for decades by the system developed by Eugene Garfield at the Institute for Scientific
Information (ISI), now owned by Thomson Reuters Scientific (Bensman, 2011). ISI Web of Science is a
publication and citation database which has covered all domains of science and social science for many years
(Aghaei Chadegani et al., 2013). In 2004, two competitors emerged, Scopus and Google Scholar (Bakkalbasi,
Bauer, Glover, & Wang, 2006). Google Inc. released the beta version of ‘Google Scholar’ (GS)
(http://scholar.google.com) in November 2004 (Pauly & Stergiou, 2005). These three tools, ISI from Thomson
Reuters, Google Scholar (GS) from Google Inc. and Scopus from Elsevier, are used by academics to track their
citation rates. Access to ISI Web of Science is a subscription-based service, while GS provides a free alternative
for retrieving citation counts. Therefore, researchers need a way to estimate their ISI citations from the known GS
citation counts. On the other hand, publishing a research paper in a scholarly journal is necessary but not
sufficient for receiving citations in the future (Nader Ale Ebrahim, 2013). The paper should be visible to the
relevant users and authors in order to get citations. Here, the visibility of a paper is measured by the number of
versions of it that are available in the Google Scholar database; the number of citations is limited by the versions
of the published article that exist on the web. The literature has shown that making research outputs available
through open access repositories increases visibility, widens access and raises citation impact (Nader Ale
Ebrahim et al., 2014; Amancio, Oliveira Jr, & da Fontoura Costa, 2012; Antelman, 2004; Ertürk & Şengül, 2012;
Hardy, Oppenheim, Brody, & Hitchcock, 2005). A paper has a greater chance of becoming highly cited when it
has more visibility (Nader Ale Ebrahim et al., 2013; Egghe, Guns, & Rousseau, 2013).
The objectives of this paper are two-fold. The first objective is to find the correlation between Google Scholar
and ISI citations for highly cited papers. The second objective is to find a relationship between paper availability
and the number of citations.
2. Google Scholar & Web of Science Citations
The citation facility of Google Scholar is a potential new tool for bibliometrics (Kousha & Thelwall, 2007). Google Scholar, a free service offered by the search giant Google, has been suggested as an alternative or complementary resource to commercial citation databases such as Web of Knowledge (ISI/Thomson) or Scopus (Elsevier) (Aguillo, 2011). Google Scholar provides bibliometric information on a wide range of scholarly journals and other published material, such as peer-reviewed papers, theses, books, abstracts and articles, from academic publishers, professional societies, preprint repositories, universities and other scholarly organizations (Orduña-Malea & Delgado López-Cózar, 2014). GS also introduced two new services in recent years: Google Scholar Author Citation Tracker in 2011 and Google Scholar Metrics for Publications in April 2012 (Jacso, 2012). Some of these documents would not otherwise be indexed by search engines such as Google, so they would be "invisible" to web searchers, and some would be similarly invisible to Web of Science users, since that database is dominated by academic journals (Kousha & Thelwall, 2007). On the other hand, the Thomson Reuters/Institute for Scientific Information (ISI) or Web of Science database (there is some ambiguity among the different names of the former ISI) comprises three databases: the Science Citation Index/Science Citation Index Expanded (SCI/SCIE, where SCIE is the online version of SCI), the Social Science Citation Index (SSCI) and the Arts and Humanities Citation Index (AHCI) (Larsen & von Ins, 2010). Since 1964, the Science Citation Index (SCI) has been a leading indexing tool (Garfield, 1972).
Few studies have examined the correlation between GS and WoS citations. Cabezas-Clavijo and Delgado-Lopez-Cozar (2013) found that average h-index values in Google Scholar are almost 30% higher than those obtained in ISI Web of Science, and about 15% higher than those collected by Scopus. GS citation data can differ greatly from citation counts drawn from fee-based databases such as ISI Web of Science (Bornmann et al., 2009). Google Scholar overestimates the number of citable articles (in comparison with formal citation services such as Scopus and Thomson Reuters) because of the automated way it collects data, including 'grey' literature such as theses (Hooper, 2012). The first objective of this study is therefore to measure the correlation between Google Scholar and ISI citation counts for highly cited papers.
3. Visibility and Citation Impact
Based on a case study, Nader Ale Ebrahim et al. (2014) confirmed that article visibility greatly improves citation impact. Journal visibility has an important influence on journal citation impact (Yue & Wilson, 2004), and greater visibility leads to higher citation impact (Zheng et al., 2012). Conversely, a lack of visibility significantly reduces citation impact (Rotich & Musakali, 2013). By reviewing the relevant literature, Nader Ale Ebrahim et al. (2013) extracted 33 different ways of increasing the likelihood of citation, and found that more visible articles tend to receive more downloads and citations. Electronic publishing is advisable for improving the visibility of scholars' work and keeping it relevant on the academic scene, as it lets readers search for and locate articles in minimal time, within one journal or across multiple journals; this includes publishing articles in reputable, peer-reviewed journals that are listed in multiple databases (Rotich & Musakali, 2013). Free online availability substantially increases a paper's impact (Lawrence, 2001a). Lawrence (2001a, 2001b) demonstrated a correlation between the likelihood of online availability of the full-text article and the total number of citations, and further showed that relative citation counts for articles available online are on average 336% higher than those for articles not found online (Craig, Plume, McVeigh, Pringle, & Amin, 2007).
However, there are limited resources explaining the relationship between paper availability and the number of citations (Craig et al., 2007; Lawrence, 2001b; McCabe & Snyder, 2013; Solomon, Laakso, & Björk, 2013).
None of these studies discusses the relationship between the number of versions and citations. The number of "versions" is shown in every Google Scholar search result. Figure 1 shows 34 different versions of an article entitled "Virtual Teams: a Literature Review" (Nader Ale Ebrahim, Ahmed, & Taha, 2009) together with its number of citations. The second objective of this research is therefore to examine the relationship between paper availability and the number of citations.
Figure 1. The number of “versions” in the Google Scholar search result
4. Methodology
Highly cited papers from Malaysia in the field of engineering were retrieved from the Essential Science Indicators (ESI), which is one of the Web of Knowledge (WoK) databases. ESI provides access to a comprehensive compilation of scientists' performance statistics and science trend data derived from the WoK Thomson Reuters databases. Total citation counts and citations per paper are indicators of the influence and impact of each paper. ESI applies a threshold for selecting highly cited papers, based on its baseline data; this threshold differs from one discipline to another. ESI rankings are determined for the most cited authors, institutions, countries and journals (The Thomson Corporation, 2013). To be selected, a paper must have been published within the last ten-year plus four-month period (January 1, 2003 to April 30, 2013) and be cited above the threshold level. The Essential Science Indicators data used in this research were updated as of July 1, 2013.
Google Scholar, a free online database, was used to derive the number of citations and versions of the ESI highly cited papers. The ESI data were collected on 29 July 2013 and the Google Scholar data on 31 July 2013. A total of 101 papers were listed in ESI as highly cited papers from Malaysia in the field of engineering. The list of 101 papers was retrieved from the ESI database and exported to an Excel sheet. A small search tool was developed to retrieve the number of citations and versions from Google Scholar; this gadget allowed the present researchers to collect the data more precisely and quickly than searching for the papers one by one (a minimal sketch of such a tool is given below). The Statistical Package for the Social Sciences (SPSS) was used for analyzing the data. The results are presented in the following section.
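As an illustration only, a minimal sketch of such a gadget is shown below. It assumes that a Google Scholar result page still exposes the English link texts "Cited by N" and "All N versions", that querying by title returns the right paper first, and that requests are spaced out politely; the URL pattern and delay are assumptions, and bulk scraping may conflict with Google Scholar's terms of service.

    import re
    import time
    import urllib.parse
    import requests

    def gs_counts(title):
        """Fetch the Google Scholar result page for a paper title and pull
        the first 'Cited by N' and 'All N versions' counts out of the HTML.
        Returns (citations, versions), with None where no match is found."""
        url = "https://scholar.google.com/scholar?q=" + urllib.parse.quote(title)
        html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
        cited = re.search(r"Cited by (\d+)", html)
        versions = re.search(r"All (\d+) versions", html)
        return (int(cited.group(1)) if cited else None,
                int(versions.group(1)) if versions else None)

    # Hypothetical usage over paper titles exported from ESI:
    for title in ["Virtual Teams: a Literature Review"]:
        print(title, gs_counts(title))
        time.sleep(10)  # assumed polite delay between requests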
5. Results and Discussion
The citation counts derived from the Web of Knowledge platform are hereafter called ISI citations. To study the relationships among the number of citations in Google Scholar, the number of ISI citations and the number of versions, correlation coefficients were computed.
Table 1 shows descriptive statistics of the variables.
Table 1. Descriptive statistics of variables

Variable                   N    Minimum  Maximum  Mean   Std. Deviation
Versions                   101  2        28       5.62   3.078
Cited in Google Scholar    101  4        348      80.76  71.718
ISI citation               101  5        189      43.15  36.076
As the numbers of citations in both Google Scholar and ISI were normally distributed, the Pearson correlation coefficient (r) was used; the results showed a very high, positive and significant association (r = 0.932, p < 0.01) between the number of citations in Google Scholar and in ISI for articles published from 2006 to 2012 from Malaysia in the field of engineering. To study the relationship between both citation counts and the number of versions,
Spearman's rho was used, owing to the non-normal distribution of the versions. The results showed a significant and positive relationship between the number of versions and citations in both Google Scholar and ISI. This relationship was stronger between versions and ISI citations (r = 0.395, p < 0.01) than between versions and Google Scholar citations (r = 0.315, p < 0.01). Linear regression was also applied to predict the number of ISI citations from Google Scholar citations. The results showed very high predictability (R² = 0.836) for the linear model (see Figure 2), which was significant (F = 511.63, p < 0.01). Therefore, the final equation for estimating ISI citations from Google Scholar citations is:
ISI Citation = 5.961 + 0.460 × (Google Scholar citations)
Figure 2. Scatter diagram between ISI citation and Google Scholar citation
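For readers who want to reproduce this kind of analysis on their own data, a minimal SciPy sketch follows. The arrays are placeholders standing in for the 101 papers' values, not the study's raw data, so the fitted coefficients will match the paper's equation only on the original dataset.

    import numpy as np
    from scipy import stats

    gs = np.array([120, 45, 300, 80, 60])         # Google Scholar citations (placeholder)
    isi = np.array([62, 22, 150, 40, 31])         # ISI citations (placeholder)
    versions = np.array([6, 3, 12, 5, 4])         # Google Scholar versions (placeholder)

    r, p = stats.pearsonr(gs, isi)                # Pearson r: GS vs ISI citations
    rho_isi, p1 = stats.spearmanr(versions, isi)  # Spearman rho: versions vs ISI
    rho_gs, p2 = stats.spearmanr(versions, gs)    # Spearman rho: versions vs GS

    # Simple linear regression predicting ISI citations from GS citations,
    # analogous to the paper's ISI Citation = 5.961 + 0.460 * (GS citations).
    fit = stats.linregress(gs, isi)
    print(f"ISI = {fit.intercept:.3f} + {fit.slope:.3f} * GS, R^2 = {fit.rvalue**2:.3f}")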
To study the effect of the number of versions on both citations in Google Scholar and ISI, simple linear
regression was applied. The results indicated that the number of versions had a significant positive effect on
citations in both databases (see Table 2 and Table 3).
Table 2. Summary of regression analysis results
Model    R Square  F         β      t      p
Model a  0.276     39.12**   0.532  6.255  <0.01
Model b  0.272     38.316**  0.528  6.19   <0.01
Predictors: Versions. Dependent variable: a = Cited in Google Scholar; b = ISI citation.
Table 3. Descriptive statistics of variables - Year
Year         N   Versions        Cited in Google Scholar   ISI citation
                 Mean   SD       Mean     SD                Mean   SD
Before 2009  20  7.75   5.25     152.85   103.741           79.8   46.6
2009         26  6.08   1.695    101.19   38.948            56.96  20.577
2010         18  5.11   2.193    70.17    50.097            41.44  26.86
2011         16  4.31   1.352    49.25    33.66             21.31  12.015
2012         21  4.48   2.089    19.9     9.518             9.24   3.315
A comparison between Google Scholar and ISI citations for the highly cited papers from Malaysia in the field of engineering (see Figure 3) shows that the citation counts in Google Scholar are consistently higher than the citation counts in ISI.
Figure 3. Comparison between Google Scholar and ISI citations
6. Conclusion
The number of publications and the number of citations in ISI Web of Science are used to measure researchers' scientific performance and research impact. However, these numbers are not freely available. The equation offered above can therefore be used as a rule of thumb for converting Google Scholar citation counts into estimated ISI citation counts. In addition, the number of versions has a significant positive effect on the number of citations in both systems. This finding supports other researchers' findings on paper visibility (Amancio et al., 2012; Antelman, 2004; Egghe et al., 2013; Ertürk & Şengül, 2012; Hardy et al., 2005). The results of this study indicate a strong correlation between the number of citations in Google Scholar and in ISI Web of Science. Researchers can therefore increase the impact of their research by increasing the visibility of their research papers (or paper versions). Future work is needed to determine the relationships between citation counts in other databases, such as Microsoft Academic Search, Scopus, CiteSeer and ISI, considering both journal articles and conference papers.
References
Aghaei Chadegani, A., Salehi, H., Yunus, M. M., Farhadi, H., Fooladi, M., Farhadi, M., & Ale Ebrahim, N.
(2013). A Comparison between Two Main Academic Literature Collections: Web of Science and Scopus
Databases. Asian Social Science, 9(5), 18-26. http://dx.doi.org/10.5539/ass.v9n5p18
Aguillo, I. F. (2011). Is Google Scholar useful for Bibliometrics? A Webometric Analysis. In E. Noyons, P.
Ngulube, & J. Leta (Eds.), Proceedings of Issi 2011: The 13th Conference of the International Society for
Scientometrics and Informetrics, Vols 1 and 2 (pp. 19-25). Leuven: Int Soc Scientometrics &
Informetrics-Issi.
Ale Ebrahim, N. (2013). Introduction to the Research Tools Mind Map. Research World, 10(4), 1-3.
http://dx.doi.org/10.5281/zenodo.7712
Ale Ebrahim, N., Ahmed, S., & Taha, Z. (2009). Virtual Teams: a Literature Review. Australian Journal of Basic
and Applied Sciences, 3(3), 2653-2669. http://dx.doi.org/10.6084/m9.figshare.1067906
Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi Tanha, F., Gholizadeh, H., & Motahar, S. M. (2014). Visibility
and Citation Impact. International Education Studies, 7(4), 120-125. http://dx.doi.org/10.5539/ies.v7n4p120
Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi Tanha, F., Gholizadeh, H., Motahar, S. M., & Ordi, A. (2013).
Effective Strategies for Increasing Citation Frequency. International Education Studies, 6(11), 93-99.
http://dx.doi.org/10.5539/ies.v6n11p93
Amancio, D. R., Oliveira Jr, O. N., & da Fontoura Costa, L. (2012). Three-feature model to reproduce the
topology of citation networks and the effects from authors’ visibility on their h-index. Journal of
Informetrics, 6(3), 427-434. http://dx.doi.org/10.1016/j.joi.2012.02.005
Antelman, K. (2004). Do open-access articles have a greater research impact? College & Research Libraries,
65(5), 372-382. http://dx.doi.org/10.5860/crl.65.5.372
Bakkalbasi, N., Bauer, K., Glover, J., & Wang, L. (2006). Three options for citation tracking: Google Scholar,
Scopus and Web of Science. Biomedical Digital Libraries, 3(1), 7. http://dx.doi.org/10.1186/1742-5581-3-7
Bensman, S. (2011). Anne-Wil Harzing: The publish or perish book: Your guide to effective and responsible
citation analysis. Scientometrics, 88(1), 339-342. http://dx.doi.org/10.1007/s11192-011-0388-8
Bornmann, L., Marx, W., Schier, H., Rahm, E., Thor, A., & Daniel, H. D. (2009). Convergent validity of
bibliometric Google Scholar data in the field of chemistry-Citation counts for papers that were accepted by
Angewandte Chemie International Edition or rejected but published elsewhere, using Google Scholar,
Science Citation Index, Scopus, and Chemical Abstracts. Journal of Informetrics, 3(1), 27-35.
http://dx.doi.org/10.1016/j.joi.2008.11.001
Cabezas-Clavijo, A., & Delgado-Lopez-Cozar, E. (2013). Google Scholar and the h-index in biomedicine: The popularization of bibliometric assessment. Medicina Intensiva, 37(5), 343-354. http://dx.doi.org/10.1016/j.medin.2013.01.008
Craig, I. D., Plume, A. M., McVeigh, M. E., Pringle, J., & Amin, M. (2007). Do open access articles have greater
citation impact?: A critical review of the literature. Journal of Informetrics, 1(3), 239-248.
http://dx.doi.org/10.1016/j.joi.2007.04.001
Egghe, L., Guns, R., & Rousseau, R. (2013). Measuring co-authors' contribution to an article's visibility.
Scientometrics, 95(1), 55-67. http://dx.doi.org/10.1007/s11192-012-0832-4
Ertürk, K., & Şengül, G. (2012). Self Archiving in Atılım University. In S. Kurbanoğlu, U. Al, P. Erdoğan, Y.
Tonta, & N. Uçak (Eds.), E-Science and Information Management (Vol. 317, pp. 79-86): Springer Berlin
Heidelberg. http://dx.doi.org/10.1007/978-3-642-33299-9_11
Fooladi, M., Salehi, H., Yunus, M. M., Farhadi, M., Aghaei Chadegani, A., Farhadi, H., & Ale Ebrahim, N.
(2013). Do Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9(5), 176-182.
http://dx.doi.org/10.5539/ass.v9n5p176
Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178, 471-479.
http://dx.doi.org/10.1126/science.178.4060.471
Hardy, R., Oppenheim, C., Brody, T., & Hitchcock, S. (2005). Open Access Citation Information.
Hooper, S. L. (2012). Citations: not all measures are equal. Nature, 483(7387), 36. http://dx.doi.org/10.1038/483036c
Jacso, P. (2012). Google Scholar Metrics for Publications: The software and content features of a new open access bibliometric service. Online Information Review, 36(4), 604-619. http://dx.doi.org/10.1108/14684521211254121
Kear, R., & Colbert-Lewis, D. (2011). Citation searching and bibliometric measures: Resources for ranking and
tracking. College & Research Libraries News, 72(8), 470-474.
Kousha, K., & Thelwall, M. (2007). Google Scholar citations and Google Web-URL citations: A multi-discipline
exploratory analysis. Journal of the American Society for Information Science and Technology, 58(7),
1055-1065. http://dx.doi.org/10.1002/asi.v58:7
Larsen, P. O., & von Ins, M. (2010). The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics, 84(3), 575-603. http://dx.doi.org/10.1007/s11192-010-0202-z
Lawrence, S. (2001a). Free online availability substantially increases a paper's impact. Nature, 411(6837),
521-521. http://dx.doi.org/10.1038/35079151
Lawrence, S. (2001b). Online or invisible. Nature, 411(6837), 521. http://dx.doi.org/10.1038/35079151
McCabe, M. J., & Snyder, C. M. (2013). Does Online Availability Increase Citations? Theory and Evidence from
a Panel of Economics and Business Journals: SSRN working paper.
Orduña-Malea, E., & Delgado López-Cózar, E. (2014). Google Scholar Metrics evolution: an analysis according
to languages. Scientometrics, 98(3), 2353-2367. http://dx.doi.org/10.1007/s11192-013-1164-8
Pauly, D., & Stergiou, K. I. (2005). Equivalence of results from two citation analyses: Thomson ISI’s Citation
Index and Google’s Scholar service. Ethics in Science and Environmental Politics, 5, 33-35.
Rotich, D. C., & Musakali, J. J. (2013). Publish or Perish: Remaining Academically Relevant and Visible In the
Global Academic Scene through Scholarly Publishing. Paper presented at the Conference and Programme
Chairs, South Africa.
Solomon, D. J., Laakso, M., & Björk, B.-C. (2013). A longitudinal comparison of citation rates and growth among open access journals. Journal of Informetrics, 7(3), 642-650. http://dx.doi.org/10.1016/j.joi.2013.03.008
The Thomson Corporation. (2013). Essential Science Indicators, Product Overview. Retrieved from
http://esi.webofknowledge.com/help//h_whatis.htm
Yue, W. P., & Wilson, C. S. (2004). Measuring the citation impact of research journals in clinical neurology: A structural equation modelling analysis. Scientometrics, 60(3), 317-332. http://dx.doi.org/10.1023/B:SCIE.0000034377.93437.18
http://dx.doi.org/10.1023/B:SCIE.0000034377.93437.18
Zhang, C.-T. (2009). The e-Index, Complementing the h-Index for Excess Citations. PLoS ONE, 4(5), e5429.
http://dx.doi.org/10.1371/journal.pone.0005429
Zheng, J., Zhao, Z. Y., Zhang, X., Chen, D. Z., Huang, M. H., Lei, X. P., … Zhao, Y. H. (2012). International
scientific and technological collaboration of China from 2004 to 2008: a perspective from paper and patent
analysis. Scientometrics 91(1), 65-80. http://dx.doi.org/10.1007/s11192-011-0529-0
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/3.0/).
International Education Studies; Vol. 7, No. 7; 2014
ISSN 1913-9020 E-ISSN 1913-9039
Published by Canadian Center of Science and Education
Ethical and Unethical Methods of Plagiarism Prevention in Academic
Writing
Kaveh Bakhtiyari1,2, Hadi Salehi3, Mohamed Amin Embi4, Masoud Shakiba2,5, Azam Zavvari2,5, Masoomeh
Shahbazi-Moghadam6, Nader Ale Ebrahim7 & Marjan Mohammadjafari8
1
Department of Computer & Cognitive Science, Faculty of Engineering, University of Duisburg-Essen,
Duisburg, Germany
2
Department of Electrical, Electronics and Systems Engineering, Faculty of Engineering and Built Environment,
Universiti Kebangsaan Malaysia, Bangi, Malaysia
3
Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
4
Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Bangi, Malaysia
5
Industrial Computing Group, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia,
Bangi, Selangor, Malaysia
6
Perdana School of Science, Technology and Innovation Policy, Universiti Teknologi Malaysia, Malaysia
7
Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP),
University of Malaya, Malaysia
8
Department of Industrial Engineering, Faculty of Engineering, Science and Research Branch, Islamic Azad
University, Kerman, Iran
Correspondence: Kaveh Bakhtiyari, Department of Electrical, Electronics and Systems Engineering, Faculty of
Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor Darul Ehsan,
Malaysia. E-mail: academic@bakhtiyari.com
Received: February 10, 2014
doi:10.5539/ies.v7n7p52
Accepted: May 13, 2014
Online Published: June 19, 2014
URL: http://dx.doi.org/10.5539/ies.v7n7p52
Abstract
This paper discusses plagiarism origins, and the ethical solutions to prevent it. It also reviews some unethical
approaches, which may be used to decrease the plagiarism rate in academic writings. We propose eight ethical
techniques to avoid unconscious and accidental plagiarism in manuscripts without using online systems such as
Turnitin and/or iThenticate for cross checking and plagiarism detection. The efficiency of the proposed
techniques was evaluated by having students apply them individually to five different texts. After the techniques were applied, the texts were checked with Turnitin to produce plagiarism and similarity reports. Finally, the effectiveness of the methods was compared, and the best result came from a hybrid combination of all the techniques. This hybrid of ethical methods decreased the plagiarism rate reported by Turnitin from nearly 100% to an average of 8.4% across 5 manuscripts.
Keywords: plagiarism, plagiarism prevention, academic cheating, ethical approaches, unethical approaches
1. Introduction
Academic publishing is a key activity for anyone in academia. Research students, lecturers, professors and researchers in general all seek to publish their work and the results of their studies in academic conference proceedings, books, journals or magazines (Hockenos, 2013). Even though getting a paper published is very important, the originality and ethics of its content are just as important. The most common problem in academic manuscripts is plagiarism (Carrol, 2002; Hawley, 1984; Patel, Bakhtiyari, & Taghavi, 2011).
Plagiarism has been defined in various resources, but one of the best definitions is given on the Dictionary.com website:
“an act or instance of using or closely imitating the language and thoughts of another author without
authorization and the representation of that author's work as one's own, as by not crediting the original
author.” (Dictionary.com, 2013)
In Etymonline, the term is traced to a word meaning "kidnapper, seducer, plunderer", in use in its literary sense since 1597. The American Heritage dictionary calls it "literary theft", and Encyclopedia Britannica describes it in terms of "fraudulence, forgery, piracy-practices" (Encyclopedia-Britannica, 2013). In short, plagiarism is the act of copying someone else's text, ideas or language and publishing it as one's own work. It is very clear from the above definitions that plagiarism is against the law, and authors should avoid it in their manuscripts.
Nowadays, plagiarism is growing along with the increase in publications and academic papers. Publications are one of the most important criteria for evaluating academics, so it is important to help authors increase the quality of their publications and to make them aware of the consequences of plagiarism. This paper pays specific attention to authors who plagiarize unconsciously, without being aware of the plagiarism, and proposes ethical techniques that can prevent, or at least decrease, such plagiarism. Beside the ethical techniques, we also present the unethical methods of masking plagiarism that are employed by authors who consciously plagiarize. We do not recommend using these unethical methods, which are not accepted in academia, but they are worth describing in order to draw librarians' and academic lecturers' attention to them. To evaluate the efficiency of the proposed ethical techniques, they were applied to five different texts by 16 students. The results showed promising efficiency in reducing plagiarism.
2. Plagiarism: Literature & Causes
We begin by discussing the causes of plagiarism and analyzing how an author might come to commit it. There are two types of authors whose manuscripts contain plagiarism. The first type plagiarizes directly, duplicating another author's text exactly as it is and presenting it as his/her own work; this is commonly known as copy-pasting in word processing applications. The second type comprises authors who plagiarize unconsciously. Many factors may cause such problems, including the following:
1) Uncited Ideas & Concepts: Scholars read many articles and build on other methodologies in order to deliver a contribution to science. Using another researcher's idea is considered plagiarism unless his/her research article is cited properly in the manuscript. Even when authors cite other scholars' research, they should avoid using the same words and language structures as the original text.
2) Many Authors: Many manuscripts involve several authors, and no single author can be sure that the others have been honest and ethical in their writing. In a collaborative work, each author is responsible for what he/she writes, but once the work is published, all the authors are partially responsible for the whole published material. Authors usually trust each other in publications; however, that trust is not always justified. As a famous quotation has it: "Sometimes too much trust kills you".
3) Accidental Similarity: An author's text may be closely similar to another author's purely by chance. When an idea or a sentence is so common that everybody uses it in their own text, it may be flagged as plagiarism by plagiarism detection systems.
4) Fixed Definitions: In almost all sciences there are fixed definitions that authors must reproduce and cite; modifying them would change or twist their meaning.
5) Cross-Text Plagiarism (Text Recycling): A piece of writing consists of several parts, such as an Introduction, Body and Conclusion; depending on the manuscript's goals, application and style, it may have further sections, including an Abstract, Evaluation, etc. Authors sometimes need to repeat concepts in various parts of the text. For example, the abstract must describe what is presented and discussed in the whole paper, while each part of the abstract is discussed in detail in the body of the manuscript. Likewise, in the methodology, evaluation or even the conclusion, the authors inevitably address problems and issues that have already been discussed and elaborated in other sections (e.g. the Introduction) (IEEE, 2011). Passages with the same meaning occurring in various places can cause language and word similarity within the manuscript: a sentence in the introduction defining the problem statement might be duplicated elsewhere in the paper (e.g. in the Conclusion) with exactly the same language and sentence structure (iThenticate.com, 2011). This is a very common source of plagiarism findings. The ability to write a sentence with the same meaning in various styles and structures is an art, and it requires practice and command of both the language and the science. Co-authoring and having the manuscript proofread help significantly to overcome this problem.
6) Self-Plagiarism (Similarity): Researchers might publish more than one article from their own research. Even though the articles are different, they share many similarities across their sections. Such papers may fall into plagiarism in the following three ways:
a. Redundancy and Duplication: This is also known as dual publication. It refers to the publication of a single research article in more than one journal. Schein found that 93 out of 660 studies (more than 14%) from 3 major surgical journals were cases of dual publication (Schein & Paladugu, 2001). Dual publication is not always considered plagiarism; in two cases it is accepted and ethical:
i) Summaries and abstracts that have been presented and published in conference proceedings can be published in a journal with some extension of the original paper. This extension should usually be a minimum of 30%.
ii) A published article can be republished in another language, but the second publication should mention and address the existence of the original (first) article.
b. Salami Slicing: When the research has different aspects, the authors sometimes generate separate articles from different sections of the research, such as the literature review and the methodology. They may also split up the results (data fragmentation) in order to generate different research papers. In all these papers, the basic parts, such as the Introduction and Problem Statement, are very similar to each other, and writing the same concepts in different words is usually difficult; some authors simply copy the text from their own previously published papers.
Besides data fragmentation, there is another issue called data augmentation. Data augmentation refers to republishing a study that has already been published after collecting new data to improve the results and the contribution of the research. Mixing both sets of results together and publishing them can easily mislead the readers (malpractice).
c. Copyright Infringement: Copying text from the author's own previously published manuscripts is another case of plagiarism. Although the authors wrote the original text, it was published under the copyright of a publisher. According to copyright rules and regulations, copying the published text, in whole or in part, is considered plagiarism (or self-plagiarism).
7) Metadata: When producing a document, word processors (such as Microsoft Office Word) generate metadata inside the document, hidden from view and from print. The metadata contain much useful information, and sometimes they cause headaches for the author (Hockenos, 2013; Mehta, Meyer, & Murgai, 2013). For example, the metadata generated in a Microsoft Word document file can include the total editing time in minutes, the last 10 authors of the file (the last 10 computer names that edited the file) and the company under which the Microsoft Office product is registered; in some cases, with track changes turned on, deleted text can also be found and extracted. Dennis Rader, the "BTK" (bind, torture, kill) murderer, was caught by police partly because his word processor left his organization's information in the metadata of a document he sent (Anderson, 2013).
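As a quick way to see what a manuscript file reveals, the sketch below reads a .docx file's core properties with the third-party python-docx package; the file name is a placeholder, and which fields are populated varies from document to document.

    from docx import Document  # third-party package: python-docx

    doc = Document("manuscript.docx")  # placeholder file name
    props = doc.core_properties

    # A few of the metadata fields Word stores invisibly in the file.
    print("author:           ", props.author)
    print("last modified by: ", props.last_modified_by)
    print("revision:         ", props.revision)
    print("created:          ", props.created)
    print("modified:         ", props.modified)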
Plagiarism as a problem, and its causes, have now been discussed. The main challenge, however, is to answer the question: "How can we avoid plagiarism?"
3. Plagiarism Prevention Methods and Techniques
In the previous section, the driving factors of plagiarism in a manuscript were discussed. Here, some solutions are presented to avoid unwanted plagiarism. As discussed earlier, there are two groups of authors whose texts contain plagiarism. The first group are aware of the plagiarism and of what they are doing; for this group, the only solution is sufficient education on the consequences of ethical and unethical writing. The second group includes authors who plagiarize unconsciously, without any intention to do so. Accordingly, two major kinds of methods are used to reduce reported plagiarism: 1) unethical methods, and 2) ethical methods.
Unethical methods decrease the reported plagiarism rate by bypassing the plagiarism detection software: the plagiarism itself is not resolved, but the software reports a very low or even 0% rate of plagiarism and similarity. Ethical methods, by contrast, concern how to change the text properly in order to genuinely decrease and avoid plagiarism. The methods discussed in this paper assume that the authors have behaved ethically in their research and have cited all references correctly and
properly in their manuscripts. In what follows, the solutions, methods and techniques of both the unethical and the ethical approaches are discussed.
3.1 Unethical Methods
Unethical methods are techniques that decrease the reported plagiarism rate without any real effort to change the original text. They aim to bypass plagiarism detection algorithms and obtain an apparently reasonable result for a highly plagiarized text (Gillam, Marinuzzi, & Ioannou, 2010). They are not decent and are not recommended by any librarian. In this section, these methods are discussed and elaborated to inform authors, lecturers, librarians and students, so that they can avoid them, be mindful of them, and make sure that such techniques have not been applied to manuscripts submitted for publication.
Besides the unethical methods, there is a concept in the publishing and writing industry called "ghost writing": having other people write on someone else's behalf. Although it is legal and common among politicians and famous people, it is prohibited in the academic world. There are ghost writers who, for money, write articles, books, academic assignments or even university theses on behalf of a researcher. Even though using a ghost-writing service may seem legal, easy and common in many areas, it is considered unethical behavior in academia.
3.1.1 Mojibake
Mojibake is a Japanese word means transformation, and here it literally means character transformation. Using
Mojibake is only available in multi-layer documents such as PDF. A PDF document has a minimum of two
layers. The first layer (top-layer) is visual, and it is visible to the reader and printer. This layer is similar to an
image, and its text cannot be rendered. The second layer (bottom-layer) is a textual layer, and it contains the text
of the visual layer. When a text is selected, copied or searched; the textual layer is used and it is shown on the
visual layer (ISO, 2008).
In PDF documents, it is possible to keep the visual layer, which is image based, and change the textual layer to
the non-meaningful expressions. Figure 1 shows a sample of a PDF document which its textual layer is changed
to Mojibake (Gillam et al., 2010).
Figure 1. A sample of a text and its Mojibake in a PDF document
When such a PDF document is submitted to plagiarism detection software, the system reads only the textual layer and therefore reports a plagiarism rate of 0%. Legitimately, the same technique can be used for copyright protection, to prevent the text of a PDF document from being copied and reused in other documents.
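One way to spot the trick is to compare the extracted textual layer against what the page visibly shows. The sketch below implements only the first half, dumping the textual layer with the third-party pypdf package so that a reviewer can eyeball it; the file name is a placeholder.

    from pypdf import PdfReader  # third-party package: pypdf

    reader = PdfReader("suspect.pdf")  # placeholder file name
    for page_number, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        # If the visible page reads normally but this dump is gibberish,
        # the textual layer has probably been tampered with.
        print(f"--- page {page_number} ---")
        print(text[:200])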
3.1.2 Cyrillic Letters
Coincidentally, some letters in the Latin and Cyrillic alphabets look the same but are actually different characters. The letters "a", "o" and "e", for example, exist in both Latin and Cyrillic and look practically identical in both alphabets, yet they have different character codes (the Cyrillic letters lie outside ASCII and have distinct Unicode code points), so computer applications treat them as different characters.
As an unethical action, the letters "a", "o" and "e" can be replaced throughout a text with their similar Cyrillic counterparts; the words containing the replaced characters then become invisible and unsearchable to plagiarism detection software (Gillam et al., 2010).
3.1.3 Space Replacement
In a text, every word is separated from adjacent words by a simple space; in effect, spaces define the words. If a text contains no spaces, it counts as a single word, even if it has hundreds of characters. This method eliminates the spaces of a text as far as plagiarism detection software is concerned while keeping it readable to humans: if the spaces are replaced with dots or with some Chinese characters that are then recolored to white, the replacements are invisible in a rich-text view, but the plagiarism detection software, which processes only the plain text and discards rich-text styling, treats the whole passage as a single word. Below are a sample text and its space-replaced version (Patel et al., 2011).
Original Text:
“Before Unicode, it was necessary to match text encoding with a font using the same encoding system. Failure
to do this produced unreadable gibberish whose specific appearance varied depending on the exact
combination of text encoding and font encoding.”
Text with spaces replaced with a Chinese character:
Before 画 Unicode,画 it 画 was 画 necessary 画 to 画 match 画 text 画 encoding 画 with 画 a 画 font 画 using
画 the 画 same 画 encoding 画 system.画 Failure 画 to 画 do 画 this 画 produced 画 unreadable 画 gibberish
画 whose 画 specific 画 appearance 画 varied 画 depending 画 on 画 the 画 exact 画 combination 画 of 画 text
画 encoding 画 and 画 font 画 encoding.
To anti-plagiarism software, the second text, which contains 39 words, counts as a single word with the same number of characters, since there is no space anywhere in it.
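The counter-measure is again normalization before comparison: strip the recolored filler characters and restore the spaces. The sketch below assumes the filler is the 画 character used in the example; a real checker would also handle other invisible or zero-width fillers.

    import re

    def restore_spaces(text, filler="\u753b"):
        """Collapse runs of the filler character (画 in the example above),
        with or without surrounding spaces, back into single spaces."""
        return re.sub(rf"\s*{filler}\s*", " ", text).strip()

    tampered = "Before 画 Unicode,画 it 画 was 画 necessary"
    print(restore_spaces(tampered))  # Before Unicode, it was necessary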
3.1.4 Automatic Translation
Machine translation systems such as SYSTRAN and Google Translate convert a text from one language into another using computer systems (Google, 2011; SysTranSoft.com, 2011). These systems improve every day, and more and more users rely on them. Even though they cannot translate a text as accurately as a human does, they are very fast and their results are serviceable. Using machine translation for an academic manuscript is a double-edged sword, with both ethical and unethical uses.
Many research studies are available in other languages and have never been translated into English. Using machine translation to translate such research and publish it as one's own work is an unethical approach, because what is claimed is not the author's original work.
3.1.5 Synonyms Replacement
Most word processors are equipped with a dictionary or thesaurus that suggests synonyms and antonyms. This feature is very useful for replacing repeated words in a paragraph with synonyms, but it can also be used to mask a copied text: if some words in the copied text are replaced with synonyms, plagiarism detection systems usually fail to detect it. Although the surface text has changed, the original structure of the content is still the same, so this is considered an unethical method of reducing reported plagiarism.
3.1.6 Text Image
The available plagiarism detection systems process only the text of a manuscript and simply ignore the images. If authors place text inside an image, it is skipped, because the system cannot recognize text within the images of a manuscript (Patel et al., 2011). Such text images are hard not only for the system but also for humans to spot manually: a text image in a Word document is fairly easy to identify, but in a PDF document it is not easy to find.
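A checker could close this gap with optical character recognition over embedded images. A minimal sketch using the third-party Pillow and pytesseract packages follows; it assumes the Tesseract engine is installed locally, and the image file name is a placeholder for an image already extracted from the manuscript.

    from PIL import Image  # third-party package: Pillow
    import pytesseract     # third-party wrapper; requires a local Tesseract install

    # Recover the text hidden inside an extracted figure so it can be fed
    # to an ordinary text-similarity check.
    image = Image.open("extracted_figure.png")  # placeholder file name
    hidden_text = pytesseract.image_to_string(image)
    print(hidden_text)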
3.2 Ethical Methods
Ethical methods focus on genuinely changing the words and structures of the text so as to decrease the similarity percentage as much as possible. These techniques should be applied together with correct and proper citation of the references in the manuscript. They do not rely on weaknesses of the plagiarism detection algorithms.
3.2.1 Reading from Many Sources
When a researcher intends to read about a topic, there are usually plenty of references to read. A researcher might get the point by reading only one reference, and reading the other, similar articles can seem a waste of time.
There is, however, a general psychological effect of reading only a single reference. When a person reads one text, he/she tends to respond using the same structures when questioned about it. Imagine, for example, a person who has watched a movie or read a book and is retelling it to others: he may unconsciously use the same, or very similar, dialogue as the actors to describe what happened. Although he does not intend to repeat the exact words and has not memorized them, he repeats what he has heard (or read).
But when a person reads the same concept in many sources, the brain builds an imaginative conceptual structure of its own. When he then writes, his brain draws on that structural model of the concept rather than on the words and language structures of any one source. Reading from many sources fixes the concept of the content in the mind, whereas reading a single text many times fixes the words of the content in the mind rather than the concept; indeed, reading a single text many times is what people usually do to memorize it.
3.2.2 Writing after a Few Days of Reading
It is quite common to write exactly what you have just read. Writing immediately after reading has the benefit of accuracy and of not missing any important part, but it also increases the chance of plagiarism appearing in the text. According to Hermann Ebbinghaus's forgetting curve of human memory, about 20% of what has been read is forgotten within a day (Ebbinghaus, 1913).
Figure 2. Forgetting curve of human brain by Hermann Ebbinghaus
Figure 2 shows the forgetting curve: after a day, 20% of what we have learnt is lost; without review, after 3 days almost 40% is forgotten, and after roughly a week it is almost completely gone. But if we review the material for a few minutes after a day, retention returns to 100%, and the slope of the forgetting curve becomes shallower than before, so that losing 20% now takes about 2 days instead of 1. The chart suggests that after 4 rounds of periodic revision, the material becomes fixed in the mind.
From this curve it can be concluded that it is preferable to write a few days after reading a reference. By then, the exact words and language structure are no longer in your mind, and your brain composes new sentences automatically.
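For illustration, the sketch below plots a common exponential parameterization of the forgetting curve, R = e^(-t/S), where R is retention, t is days elapsed and S is a memory-strength constant. The S values are arbitrary choices that mimic the flattening after each review; they are not figures from Ebbinghaus.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 7, 200)  # days since learning or since the last review

    # R = exp(-t / S): retention decays exponentially; a larger memory
    # strength S gives a shallower curve. S values here are illustrative:
    # S = 4.5 loses roughly 20% in the first day, matching the text.
    for S, label in [(4.5, "first exposure"),
                     (9.0, "after 1st review"),
                     (18.0, "after 2nd review")]:
        plt.plot(t, np.exp(-t / S), label=label)

    plt.xlabel("days")
    plt.ylabel("retention")
    plt.legend()
    plt.title("Exponential forgetting-curve sketch")
    plt.show()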
3.2.3 Using Thesaurus/Dictionary
A plagiarized sentence is an expression built from duplicated words, so replacing words with synonyms can help authors reduce the chance and amount of similarity in the text. Dictionaries and thesauruses are good tools for this purpose; beside paper-based dictionaries, most word processing software now features an integrated dictionary that suggests word replacements. Using these tools not only decreases the plagiarism rate but also beautifies the text and makes it more fluent by varying repeated words within a paragraph.
This technique is not ethical by itself, because replacing words does not change the sentence structures; it becomes ethical only when combined with the other approaches to avoiding plagiarism.
3.2.4 As Many Proofreads as Possible
Many authors submit their work for publication as soon as they finish writing. They may well be good writers and confident in the science, but having the paper proofread by as many people as possible produces minor changes and improvements in the language structure of the paper, and these changes reduce the plagiarism and similarity rate.
There are three issues the authors should be aware of. Firstly, they should primarily ask people from the same research discipline to proofread the text, so that the wording is modified while the meaning remains the same; asking various people from various research areas can simply introduce ambiguities and opacities into the text.
The second issue is that circulating the paper to many people increases the risk of research piracy: a colleague might publish the manuscript before the main author does. The authors should therefore choose people who are trustworthy and reliable.
The third issue is to remember to review the paper after each round of proofreading. Even if the authors trust the knowledge of the colleagues who proofread the paper, they should still check that everything has been done correctly, since it is quite possible for mistakes to creep in, or for the meaning to be twisted, when a manuscript is edited.
3.2.5 Using Machine Translators
Some people find it very tricky to write a sentence with the same meaning in several structures (i.e. to paraphrase); sometimes they can find no solution other than copying from the original. Finding people to paraphrase for them is not always possible, or is time consuming. Using machine translators is an easy, though not fast, alternative.
Computer translators use Natural Language Processing (NLP) techniques. NLP is a research area in its own right and is not discussed in this paper, but it is worth seeing how it works in multilingual translators. They choose one language (usually English) as a pivot language and translate every other language from and into it. For example, to translate a text from French into German, the system first translates French into English and then English into German. The reason is graph-theoretic: building translation rules for every language pair in a multilingual translator would create a mesh network of languages and rules, which is a really huge set of rules. A multilingual translator supporting 10 languages needs only 9 sets of rules, between the 9 other languages and the pivot language, whereas a full mesh for n = 10 languages requires

n(n - 1) / 2 = 10 × 9 / 2 = 45 sets of rules. (1)
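Equation 1 can be checked in a couple of lines, comparing the pivot-language design with a full pairwise mesh for a given number of languages:

    def rule_sets(n):
        """Return (pivot, mesh): rule sets needed with one pivot language
        versus a full pairwise mesh of n languages, per Equation 1."""
        return n - 1, n * (n - 1) // 2

    print(rule_sets(10))  # (9, 45)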
According to research in 2011 (Patel et al., 2011), translating a text from one western language into another and back to the source language does not change it much. It is therefore suggested to translate it
across eastern and western languages before returning to the source language. For example, consider a text in English: it is suggested to translate it into an eastern language (e.g. Chinese), then from the eastern language into a western language (e.g. French), then to repeat this procedure, translating from western into eastern languages, and finally to translate back into the original language, English in this example. The resulting text will be quite different from the original.
Despite being easy to use, the method calls for a few cautions. Firstly, the generated text may be grammatically incorrect and must be edited; sometimes editing a sentence is easier than paraphrasing the whole sentence or paragraph. Secondly, editing incorrect sentences takes a long time and is not as easy or fast as it looks, but it helps a lot.
3.2.6 Periodic Self-Reading
Once the paper is written, the authors should neither abandon it nor submit it immediately. Reading what they have just written, they will only find grammatical mistakes or missing words; deeper problems will not surface at that stage. But if the authors read the text a few days later, they can catch more failures, such as unclear sections, and even write new sentences to extend the paper. Reading one's own paper periodically can fix mistakes and improve the quality of the writing. That said, in spite of the benefits of periodic self-reading, over-reading a paper can become obsessive.
3.2.7 Quoting
In some cases, authors find it unavoidable to write the exact words of the original text, for example a famous saying, a scientific abbreviation or a well-known expression. In these cases, quoting is the solution. Quoted words are ignored by plagiarism checks and are not considered plagiarism. A quotation can run from a few words to hundreds; the technique is simply to place the text between quotation marks and reference the source at the end of the expression. For example: "A person who never made a mistake never tried anything new" (Albert Einstein).
Beside the ethical use of quoting, however, opening a quotation mark at the beginning of the manuscript and closing it at the end is an unethical and inadmissible approach.
3.2.8 Realistic & Imaginative Context
The chosen solution (methodology) plays one of the most important roles in the whole journey of the research: the right solution drives the author to the destination, while the wrong one can lead nowhere. Imagination is where new ideas are first born. A researcher tries to use his imaginings, lead them toward reality, and respect them by writing them down, feeding, raising and improving them, and finally publishing them. Although an idea from the imagination might look funny or silly at first sight, it has the potential to open a new door toward solving the problem. Writing from one's imagination is essentially plagiarism-free, because the ideas are purely the author's own.
4. Efficiency Evaluation of Plagiarism Prevention Techniques
In this section, the ethical techniques above are evaluated and the results compared, to check how effective each is at decreasing plagiarism and similarity in academic writing. A group of 16 students from the National University of Malaysia (Bangi, Malaysia) and the Erican Language Centre (Kajang, Malaysia) was selected. Five texts of 400-500 words each were chosen from Wikipedia, giving an initial similarity index of 95%-100% in the Turnitin software. In the first stage, each of the ethical techniques was taught to two students, who were asked to apply that specific technique to all 5 Wikipedia texts; the students were allowed to edit the text as long as the meaning stayed the same. At the end of the first stage, each ethical technique had thus been applied to all 5 texts by two students, producing 80 texts from the nearly 100% plagiarized originals (copied from Wikipedia). These 80 texts were checked for plagiarism with the Turnitin system.
Figure 3 is a bar chart; its underlying data, the Turnitin similarity index (%) of each text for each student, are:

Student   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
Text 1   40  51  20  25  65  59  75  81  34  43  70  75  85  89  45  51
Text 2   37  39  25  30  70  64  75  85  30  35  71  76  88  86  48  48
Text 3   47  43  29  32  75  63  79  87  28  41  65  70  87  81  41  51
Text 4   51  49  35  28  72  61  83  79  25  42  72  70  82  85  56  53
Text 5   42  55  21  29  68  62  78  80  35  45  75  73  80  79  42  50
Figure 3. Similarity index of each text by each student
Each pair of students was trained in one ethical method: students 1 & 2 in method 1 (3.2.1), students 3 & 4 in method 2 (3.2.2), and so on, up to students 15 & 16 in method 8 (3.2.8). Figure 3 shows the 80 percentage values produced by the 16 students on the 5 different texts. It indicates that method 2 (Writing after a Few Days of Reading), method 5 (Using Machine Translators) and method 8 (Realistic and Imaginative Context) are the top 3 methods for avoiding plagiarism in a text. To clarify this result, the two values for each method on each text are averaged, as illustrated in Figure 4.
Figure 4 is a bar chart; its underlying data, the average similarity index (%) of each method (1-8) on each text, are:

Method     1     2     3     4     5     6     7     8
Text 1   45.5  22.5  62    78    38.5  72.5  87    48
Text 2   38    27.5  67    80    32.5  73.5  87    48
Text 3   45    30.5  69    83    34.5  67.5  84    46
Text 4   50    31.5  66.5  81    33.5  71    83.5  54.5
Text 5   48.5  25    65    79    40    74    79.5  46
Figure 4. Average of each method on every single text
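The averaging just described can be reproduced directly from the Figure 3 values. A short NumPy sketch follows, in which rows are texts and consecutive column pairs are the two students trained in the same method; it regenerates the Figure 4 and Figure 5 values.

    import numpy as np

    # Similarity index (%) from Figure 3: rows = texts 1-5, cols = students 1-16.
    scores = np.array([
        [40, 51, 20, 25, 65, 59, 75, 81, 34, 43, 70, 75, 85, 89, 45, 51],
        [37, 39, 25, 30, 70, 64, 75, 85, 30, 35, 71, 76, 88, 86, 48, 48],
        [47, 43, 29, 32, 75, 63, 79, 87, 28, 41, 65, 70, 87, 81, 41, 51],
        [51, 49, 35, 28, 72, 61, 83, 79, 25, 42, 72, 70, 82, 85, 56, 53],
        [42, 55, 21, 29, 68, 62, 78, 80, 35, 45, 75, 73, 80, 79, 42, 50],
    ])

    # Students 2k-1 and 2k applied method k: averaging each adjacent column
    # pair gives the per-text method averages of Figure 4 ...
    per_text = scores.reshape(5, 8, 2).mean(axis=2)

    # ... and averaging over texts gives the per-method values of Figure 5.
    per_method = per_text.mean(axis=0)
    print(per_method.round(1))  # [45.4 27.4 65.9 80.2 35.8 71.7 84.2 48.5]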
Figure 5 illustrates the average similarity index after applying each method to all 5 texts. Lower values indicate less similarity, and hence a lower plagiarism rate, for that method.
Figure 5 is a chart; its underlying data, the average similarity index (%) of each method over all five texts, are:

Method    1     2     3     4     5     6     7     8
Average  45.4  27.4  65.9  80.2  35.8  71.7  84.2  48.5
Figure 5. Average of similarity index of each method on all texts
Figures 4 and 5 illustrate the effectiveness of each method. As can be seen, the best method for avoiding plagiarism was method 2 (Writing after a Few Days of Reading). We also decided to measure how the techniques would affect the texts when applied all together. In the final stage, all 16 students were therefore asked to apply their learnt methods to the texts again, this time working collaboratively and simultaneously on the 5 texts. At the end, we had 5 texts with all 8 methods applied by the 16 students, and these 5 texts were evaluated for their similarity index. Table 1 shows the plagiarism rate of each text after all 8 methods were applied: the similarity indices were reduced from nearly 100% to an average of 8.4%, an excellent result for the combination of 8 methods and different students.
Table 1. Plagiarism rates of all 5 texts by applying 8 methods
Text #      1    2    3    4     5    Average
Plagiarism  7%   5%   9%   13%   8%   8.4%
5. Conclusion
This paper began by explaining the issues of plagiarism and literary piracy in academic manuscripts. It presented several unethical methods in order to warn authors, students and librarians about them, and then proposed ethical methods for avoiding plagiarism and decreasing the similarity rate in academic writing. These ethical methods were all evaluated on 5 manuscripts by 16 students and checked with Turnitin. The results show that all of the methods can reduce the similarity index significantly; the most effective was the second approach, "Writing after a Few Days of Reading". A hybrid of all the methods can produce an almost plagiarism-free text, down to an average similarity index of 8.4%. Researchers can use these results to improve the quality and originality of their academic writing.
References
Anderson, J. (2013). Breaking the BTK Killer's Cipher. Cryptologia, 37(3), 204-209. http://dx.doi.org/10.1080/01611194.2013.797047
Carrol, J. (2002). A handbook for detecting plagiarism in higher education. Oxford: The Oxford Centre for Staff and Learning Development, Oxford Brookes University.
Dictionary.com. (2013). Plagiarism. Retrieved January 8, 2013, from http://dictionary.reference.com/browse/Plagiarism
Ebbinghaus, H. (1913). Memory: A contribution to experimental psychology. Teachers College, Columbia University.
Encyclopedia-Britannica. (2013). Plagiarism. Retrieved January 8, 2013, from http://dictionary.reference.com/browse/Plagiarism
Gillam, L., Marinuzzi, J., & Ioannou, P. (2010). TurnItOff: Defeating plagiarism detection systems. Paper
presented at the 11th Higher Education Academy-ICS Annual Conference, University of Durham, UK.
Google. (2011). Google Translator. Retrieved April 20, 2011, from http://translate.google.com/
Hawley, C. S. (1984). The thieves of academe: Plagiarism in the university system. Improving College and University Teaching, 32(1), 35-39. http://dx.doi.org/10.1080/00193089.1984.10533838
Hockenos, P. (2013). High-Profile Plagiarism Prompts Soul-Searching in German Universities. Chronicle of
Higher Education.
IEEE. (2011). CrossCheck User Guide. Retrieved April 20, 2011, from http://www.ieee.org/documents/CrossCheck_User_Guide.pdf
ISO. (2008). Document management - Portable document format - Part 1: PDF 1.7 (ISO 32000-1:2008). International Organization for Standardization.
iThenticate.com. (2011). CrossCheck. Retrieved April 20, 2011, from http://www.crossref.org/crosscheck/index.html
Mehta, S., Meyer, D., & Murgai, P. (2013). Identifying and ranking networked biographies and referral paths
corresponding to selected qualifications: US Patent 20,130,018,871.
Patel, A., Bakhtiyari, K., & Taghavi, M. (2011). Evaluation of Cheating Detection Methods in Academic
Writings. Library Hi Tech, 29(4), 623-640. http://dx.doi.org/10.1108/07378831111189732.
Schein, M., & Paladugu, R. (2001). Redundant surgical publications: Tip of the iceberg? Surgery, 129(6),
655-661. http://dx.doi.org/10.1067/msy.2001.114549.
SysTranSoft.com. (2011). SYSTRAN–Online Translation Software. Retrieved April 20, 2011, from
http://www.systransoft.com/
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/3.0/).
Cite as: N. Ale Ebrahim, “How to Promote Your Article,” University of Malaya Research Bulletin,
vol. 1, no. 1, 23 June, 2014. doi: 10.6084/m9.figshare.1083502
How to Promote Your Article
Nader Ale Ebrahim
Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring
(IPPP), University of Malaya, Kuala Lumpur, Malaysia
Abstract:
Writing an article for online distribution in a way that maximizes the chances of citation differs from preparing one for print journals in some small but important respects. To be cited, articles have to be visible in an electronic environment. Publishing a high-quality paper in a scientific journal is therefore only halfway to receiving citations in the future; the rest of the way is advertising and disseminating the publication by using the proper "Research Tools". Familiarity with these tools allows the researcher to increase his/her h-index in a short time.
Keywords: University ranking, Improve citation, Citation frequency, Research impact, Open
access, h-index, Increase citations, Research tools
Introduction:
This article provides a list of simple yet effective ways to promote your publications [1-3].
1. Use a unique name consistently throughout your academic career.
2. Use a standardized institutional affiliation and address, using no abbreviations.
3. Repeat key phrases in the abstract while writing naturally.
4. Assign keyword terms to the manuscript.
5. Optimize your article for Google Scholar and other academic search engines.
6. Make a unique phrase that reflects your research interest and use it throughout your academic life.
7. Make sure the article metadata are correct, especially author and title (see the sketch after this list).
8. Publish in a journal with a high impact factor.
9. Self-archive your articles.
10. Keep your professional web pages and publication lists up to date (make an attractive homepage that is clear about what you are working on).
11. Make your research easy to find, especially for online searchers.
12. Publish in an Open Access (OA) journal.
13. Deposit your paper in an Open Access repository.
14. Publish with international authors.
15. Write articles collaboratively (team-authored articles get cited more).
16. Use more references.
17. Cite relevant thought influencers in your article (to be the best, cite the best).
18. Publish a longer paper.
19. Publish papers with Nobel laureates.
20. Contribute to Wikipedia.
21. Start blogging and share your blog posts with your target audience.
22. Share your article on all your social media platforms.
23. Interact with your peer connections through academic social media.
24. Keep track of all your international contacts.
25. Follow the conversation on academic social media.
26. When your paper is finally published, individually email the PDF to selected collaborators.
27. Respond to comments.
28. Write a review paper.
29. Avoid question-type titles.
30. Share detailed research data.
31. Publish across disciplines.
32. Present a working paper.
33. Publish your article in one of the journals everyone in your discipline reads.
34. Add your latest published article's link to your email signature.
35. Take 50 photocopies of your best 1-2 papers to conferences, and leave them on the brochure desk as a handout.
36. Increase the number of publications in peer-reviewed journals.
37. After a conference accepts the paper, correct it, extend it, and submit it to a journal.
38. Publish your work in a journal with the highest number of abstracting and indexing services.
39. Create a podcast describing the research project.
40. Make an online CV, for example on ORCID or ResearcherID.
41. Publish tutorial papers.
42. Follow up on the papers which cited your article.
43. Use all the "Enhancing Visibility and Impact" tools which are available at http://www.mindmeister.com/39583892/research-tools-by-nader-ale-ebrahim.
Source: http://contentmarketinginstitute.com/2011/03/blog-post-to-dos/
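As an illustration of tip 7 above, the following is a minimal sketch, assuming the third-party pypdf library and hypothetical file and author names, of how the author and title metadata of an article PDF could be checked and set before distribution. This is not part of the original article, only one possible way to act on the tip.

# Minimal sketch: set PDF author/title metadata with pypdf
# (assumed installed via `pip install pypdf`); file names are hypothetical.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("my_article.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages from the source PDF

# Some search engines may use these fields to identify the file or to
# display information about the article on the results page.
writer.add_metadata({
    "/Author": "Jane Researcher",  # use the same unique name form everywhere
    "/Title": "Exact Title of the Published Article",
})

with open("my_article_with_metadata.pdf", "wb") as f:
    writer.write(f)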
References
[1] N. Ale Ebrahim, H. Salehi, M. A. Embi, F. Habibi Tanha, H. Gholizadeh, S. M. Motahar, and A. Ordi, "Effective Strategies for Increasing Citation Frequency," International Education Studies, vol. 6, no. 11, pp. 93-99, October 23, 2013.
[2] N. Ale Ebrahim, H. Salehi, M. A. Embi, F. Habibi Tanha, H. Gholizadeh, and S. M. Motahar, "Visibility and Citation Impact," International Education Studies, vol. 7, no. 4, pp. 120-125, March 30, 2014.
[3] N. Ale Ebrahim, "Introduction to the Research Tools Mind Map," Research World, vol. 10, no. 4, pp. 1-3, June 14, 2013.
Journal of Theoretical and Applied Information Technology
10th November 2013. Vol. 57 No. 1
© 2005 - 2013 JATIT & LLS. All rights reserved.
ISSN: 1992-8645   www.jatit.org   E-ISSN: 1817-3195
CONTRIBUTION OF INFORMATION AND COMMUNICATION TECHNOLOGY (ICT) IN COUNTRY'S H-INDEX
MARYAM FARHADI1*, HADI SALEHI2,3, MOHAMED AMIN EMBI2, MASOOD FOOLADI1, HADI FARHADI4, AREZOO AGHAEI CHADEGANI1, and NADER ALE EBRAHIM5
1 Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran
2 Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Bangi, 43600, Malaysia
3 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
4 School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia
5 Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
* Corresponding author: Maryam Farhadi, Email: farhadim58@gmail.com
ABSTRACT
The aim of this study is to examine the effect of Information and Communication Technology (ICT) development on a country's scientific ranking as measured by the H-index. Moreover, this study applies the ICT development sub-indices, namely ICT Use, ICT Access and ICT Skill, to find the distinct effect of each sub-index on a country's H-index. To this purpose, the required data for a panel of 14 Middle East countries over the period 1995 to 2009 are collected. The findings of the current study show that ICT development increases the H-index of the sample countries. The results also indicate that the ICT Use and ICT Skill sub-indices positively contribute to a higher H-index, but the effect of ICT Access on a country's H-index is not clear.
Keywords: Information and Communication Technology (ICT) development, H-index, Middle East
1. INTRODUCTION

In the last decades, technology has improved lifestyles and eased human activities, creating a new civilization. A new revolution named Information and Communication Technology (ICT) has occurred in the present world [1]. ICT is defined as a concept including information equipment as well as computers and software, auxiliary equipment connected to a computer (such as calculators, cash registers and photocopiers), communications equipment, and instruments [2]. In other words, ICT is a technology that aims to connect computers and other communication equipment to gather, produce, process, classify, manage, create, and distribute information. After the invention of machines in the 1770s, the industrial revolution started and machines were used instead of handwork.

However, the ICT revolution and the use of the internet caused many structural changes [3]. As an example, United States productivity revived during the late 1990s and early 2000s because of ICT development [4]. Many studies introduce ICT as an important element facilitating economic growth in both developing and developed countries [5, 6]. In addition, ICT has the potential to improve health systems through the design of new methods to prevent and detect disease [7]. Other studies confirm the dominant role of ICT in improving and modernizing learning methods and educational systems [8, 9]. The fast progress of technology in the last two decades has reduced the cost of ICTs, making it easier for more people to use and enjoy their advantages [10].
Although the key role of ICT is recognized in different areas such as engineering [11], education [12], health care [13], and the economy [6], its effect on the scientific ranking of a country is as yet unexplored. Therefore, the aim of this study is to explore the impact of ICT development, as measured by the ICT Development Index (IDI), on the H-index as a proxy of the scientific progress of a country.

In addition, in order to shed more light on this area, the present study applies the three components of ICT development, namely ICT Skill, ICT Use and ICT Access, to examine the distinct impact of these components on the H-index. To this purpose, this study applies a multiple regression analysis on a panel of 14 Middle East countries over the period 1995 to 2009.

The current study uses the H-index provided by the SCImago database [14] to measure the scientific ranking of a country. Based on Hirsch [15], a scientist or a group of scientists has an H-index of h when h of their papers have at least h citations. In other words, the H-index is the number of articles with a citation count higher than or equal to h [15]. The H-index, invented by Hirsch in 2005, proposes a simple and apparently robust measure to compare the scientific productivity and visibility of countries and institutions [16].

The H-index measures the quantity and quality of papers simultaneously, since both the total number of published articles and the number of citations received by those articles enter its calculation. In addition, many studies reveal the benefits of the H-index as an appropriate and fair tool for evaluating and ranking scientists, laboratories, scientific journals, universities, institutions and countries [17, 18, 19, 20, 21].

Hirsch [15] proposes that the H-index can be used to appraise not only scientists but also countries and institutions. Hence, following Jacso [22], the current study also uses the H-index as a proxy to measure the scientific ranking of countries.

2. METHODOLOGY AND DATA

2.1. Empirical Model

In order to examine the effect of ICT development on the scientific degree of 14 Middle East countries, this study applies regression techniques with countries' H-index as the dependent variable and IDI as the independent variable, as follows:

$H_{it} = \alpha_0 + \alpha_1 IDI_{it} + \varepsilon_{it}$    (1)

where the $\alpha$'s are parameters to be estimated, H represents the H-index, IDI stands for the ICT development index, and subscripts i and t denote the ith country and tth year, respectively. $\varepsilon$ is the error term, which is assumed to be independent of IDI. Then, for further inspection, instead of IDI, which captures the composite effect of ICT development, the current study uses the three ICT sub-indices, ICT Access, ICT Use and ICT Skill, as explanatory variables. In this approach, we can test the distinct impact of each sub-index on the country's H-index. Consequently, in this step of the analysis, the following equation (Equation 2) is used:

$H_{it} = \mu_0 + \mu_1 Access_{it} + \mu_2 Use_{it} + \mu_3 Skill_{it} + \delta_{it}$    (2)

where the $\mu$'s are parameters to be estimated, H is the H-index, and Access, Use and Skill stand for the ICT development sub-indices. Subscripts i and t again indicate the ith country and tth year, and $\delta$ is the error term, which is assumed to be independent of the Access, Use and Skill variables.

2.2. Data and Measurement

2.2.1. ICT development

The IDI, as an indicator of ICT development, is computed based on Principal Component Analysis (PCA), which combines various indicators into a single value. The IDI was introduced for the first time in 2009 in Measuring the Information Society [23] and considers the degree of progress in ICTs in more than 150 countries. This indicator is composed of ICT Access, showing infrastructure and access; ICT Skill, referring to ICT skill or capability; and ICT Use, displaying the extent of ICT use. The ICT data required to compute the IDI are extracted from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the ITU. Nowadays, ICT is considered an important factor in the development of countries, which are moving towards knowledge or
information-based societies. It is assumed that these countries have experienced the process of ICT development to turn into an information society. Therefore, the ITU [23] suggests a three-phase model including ICT readiness (showing the degree of networked infrastructure and ICT access), ICT intensity (showing the degree of ICT use by the society) and ICT impact (showing the result of effective and efficient ICT use). The IDI is a composite indicator including ICT access, ICT skill and ICT use. Each sub-index is composed of a number of indicators, explained as follows:
[Figure 1. Three Stages in the Evolution towards an Information Society (Source: ITU [25])]
1. Access Sub-Index: shows the availability of ICT infrastructure and individuals' access to basic ICTs. This sub-index has five indicators: fixed telephone lines per 100 inhabitants, mobile cellular telephone subscriptions per 100 inhabitants, international Internet bandwidth (bit/s) per Internet user, proportion of households with a computer, and proportion of households with Internet access at home.

2. Use Sub-Index: shows the real intensity of ICT use based on the available infrastructure and access indicators. Considering the limitations on ICT Use data at the global scale and the results of the PCA, this subgroup has three indicators: Internet users per 100 inhabitants, fixed broadband Internet subscribers per 100 inhabitants, and mobile broadband subscriptions per 100 inhabitants.

3. Skill Sub-Index: captures the degree of ICT skill in different countries. Since the required data for many developing countries are not collected, an appropriate indicator can be the degree of literacy and education. Especially in developing countries, a poor level of education is a major obstacle to the effective use of the internet and computers. With the inclusion of ICT in school courses, school attendance may offer an appropriate indicator of students' exposure to the Internet or computers. Therefore, this subgroup has three indicators: adult literacy rate, secondary gross enrolment ratio, and tertiary gross enrolment ratio. The required data for this subgroup are extracted from the UNESCO Institute for Statistics (UIS) [24].

2.2.2. H-index

Different sources report the H-index. Some of these sources are subscription-based, such as Scopus and Web of Science (WoS), and some are free-access sources, such as Harzing's Publish or Perish program, which is based on Google Scholar entries. It should be noted that these sources report different H-indices for the same institution, country or scholar because of their different coverage. For example, WoS covers a wide range of published journals but does not cover high-impact conferences. Although the coverage of publications in Scopus is poor prior to 1996, it covers conferences properly. Documents in Google Scholar receive more citations in comparison with those in Scopus and WoS. It is argued that Google Scholar has the widest coverage of journals and conferences, particularly those published after 1990 [26].

The current study collects the required H-index data for the panel of 14 Middle East countries over the period 1995 to 2009 from the SCImago database [14]. The SCImago Journal and Country Rank website presents journal and country scientific indicators based on the information included in the Scopus database.
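As a quick illustration of Hirsch's definition used above, the following is a minimal, self-contained Python sketch (with made-up citation counts, not data from any of these sources) that computes an h-index from a list of per-paper citation counts.

# Minimal sketch of Hirsch's rule: the h-index is h when h papers
# have at least h citations each. Citation counts are illustrative.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have >= 4 citations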
3. FINDINGS AND DISCUSSION

In this section, we first estimate Equation 1, which explores the impact of ICT development on the H-index as a proxy of a country's scientific ranking. Then, applying the three ICT sub-indices, this study presents the estimation results for Equation 2, which corresponds to the effect of the ICT development sub-indices on the H-index.
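For readers who want to reproduce this kind of estimation, the following is a minimal sketch, on synthetic data rather than the study's panel, of fitting Equation (1) by ordinary least squares with heteroskedasticity-consistent standard errors (as the notes under the tables below report), using the statsmodels library. The variable names and generated numbers are illustrative assumptions only.

# Minimal sketch: OLS estimate of Equation (1) on a synthetic panel of
# 14 countries x 15 years (1995-2009), with robust (HC1) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_obs = 14 * 15                                # 210 country-year observations
idi = rng.uniform(1, 6, n_obs)                 # synthetic IDI scores
h = 24 + 32 * idi + rng.normal(0, 30, n_obs)   # synthetic H-index values

X = sm.add_constant(idi)                       # regressors: [1, IDI]
result = sm.OLS(h, X).fit(cov_type="HC1")      # heteroskedasticity-consistent SEs
print(result.params)                           # estimated constant and slope
print(result.bse)                              # their robust standard errors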
Table 1. Regression Results for the Impact of ICT Development on H-Index

Dependent variable: H-index
Variable    Coefficient    Standard Error    t
IDI         31.69          4.67***           6.79
Constant    24.16          13.35*            1.81
Observations = 210; F(1, 208) = 46.07***; R-squared = 0.1813; Adjusted R-squared = 0.1774
Note: ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively. Standard errors are heteroskedasticity-consistent.
Table 1 shows the effect of ICT development on each country's H-index. The estimation results indicate a positive and significant effect of ICT development on a country's scientific ranking as proxied by the H-index. This finding suggests that suitable policies aimed at improving the level of ICT development can significantly increase the scientific ranking of a country. The results also assert that if a country raises its ICT development index by one unit, its H-index will increase by about 31 units. As can be seen in Table 1, the adjusted R-squared is equal to 0.1774, which means that ICT development can explain 17.74% of the variation in countries' H-index.

Table 2 exhibits the impact of the ICT development sub-indices on the H-index of the 14 sample countries. The estimated coefficients show a positive and significant effect of ICT Use and ICT Skill on a country's scientific ranking. This finding asserts that increasing the application of ICTs and improving ICT skills might significantly increase the scientific degree of a country. The results also indicate that if a country increases its ICT Use and ICT Skill indices by one unit, its H-index will rise by 133 and 407 units, respectively. As can be seen in Table 2, the coefficient for the effect of ICT Access on the H-index is negative but not significant. Finally, based on the adjusted R-squared, the ICT sub-indices can explain 27.11% of the variation in countries' H-index.
Table 2. Regression Results for the Impact of ICT Development Sub-Indices on H-Index

Dependent variable: H-index
Variable      Coefficient    Standard Error    t
ICT Access    -25.46         49.80             -0.51
ICT Use       133.23         64.64**           2.06
ICT Skill     407.28         64.86***          6.28
Constant      -151.10        35.82***          -4.22
Observations = 210; F(3, 206) = 26.91***; R-squared = 0.2816; Adjusted R-squared = 0.2711
Note: ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively. Standard errors are heteroskedasticity-consistent.
4. CONCLUSION AND IMPLICATIONS

This paper examines the effect of ICT development on the scientific degree of 14 Middle East countries as proxied by the H-index. The results of the regression analysis show that ICT development has a positive and significant effect on the H-index of these countries. Moreover, applying the ICT development sub-indices, this study finds that ICT Use and ICT Skill might have positive effects on the H-index, but the effect of ICT Access is not clear since its corresponding coefficient is not statistically significant. Therefore, we can conclude that ICT induces outcomes that lead to higher H-index values and raises the scientific level of the sample countries. In this situation, policy makers should aim at increasing the level of ICT development by improving its indicators, including fixed and mobile telephone lines, international Internet bandwidth, the proportion of households with a computer and Internet access, Internet users, fixed and mobile broadband Internet subscribers, the adult literacy rate, and secondary and tertiary gross enrolment ratios.

REFERENCES:

[1] R.J. Gordon, "Does the 'New Economy' measure up to the great inventions of the past?", Journal of Economic Perspectives, Vol. 14, No. 4, 2000, pp. 49-74.
[2] S.D. Oliner and D.E. Sichel, "Computers and output growth revisited: How big is the puzzle?", Brookings Papers on Economic Activity, Vol. 2, 1994, pp. 273-334.
[3] S.S. Cohen, J.B. Delong and J. Zysman, "Tools for thought: What is new and important about the E-economy?", BRIE Working Paper, No. 138, 2000.
[4] D.W. Jorgenson and K.J. Stiroh, "Information technology and growth", American Economic Review Papers and Proceedings, Vol. 89, No. 2, 1999, pp. 109-115.
[5] H. Gruber and P. Koutroumpis, "Mobile telecommunications and the impact on economic development", Economic Policy, Vol. 26, No. 67, 2011, pp. 387-426.
[6] M. Farhadi, R. Ismail and M. Fooladi, "Information and Communication Technology Use and Economic Growth", PLoS ONE, Vol. 7, No. 11, 2012, e48903. doi:10.1371/journal.pone.0048903
[7] K. Haridasan, S. Rangarajan and G. Pirio, "Re-Inventing Health Care Training in the Developing World: The Case for Satellite Applications in Rural Environments", Online Journal of Space Communication, 14(Winter), 2009.
[8] A. Sharma, K. Gandhar, S. Sharma and S. Seema, "Role of ICT in the Process of Teaching and Learning", Journal of Education and Practice, Vol. 2, No. 5, 2011, pp. 1-6.
[9] A. Sangrà and M. González-Sanmamed, "The role of information and communication technologies in improving teaching and learning processes in primary and secondary schools", Journal of Asynchronous Learning Networks, Vol. 15, No. 4, 2011, pp. 47-59.
[10] International Telecommunication Union, "Measuring the information society: The ICT Development Index", Place des Nations, CH-1211 Geneva, Switzerland, 2011.
[11] C.C. Aldea, A.D. Popescu, A. Draghici and G. Draghici, "ICT Tools Functionalities Analysis for the Decision Making Process of Their Implementation in Virtual Engineering Teams", Procedia Technology, Vol. 5, 2012, pp. 649-658.
[12] F. Scheuermann and F. Pedró, "Assessing the effects of ICT in education: Indicators, criteria and benchmarks for international comparisons", Luxembourg: Publications Office of the European Union, 2009.
[13] A.R. Bakker, "Health care and ICT, partnership is a must", International Journal of Medical Informatics, Vol. 66, No. 1-3, 2002, pp. 51-57.
[14] SCImago, SJR — SCImago Journal & Country Rank, 2007, Retrieved March 11, 2013, from http://www.scimagojr.com
[15] J.E. Hirsch, "An index to quantify an individual's scientific research output", Proceedings of the National Academy of Sciences, Vol. 102, No. 46, 2005, pp. 16569-16572.
[16] J.F. Molinari and A. Molinari, "A new methodology for ranking scientific institutions", Scientometrics, Vol. 75, No. 1, 2008, pp. 163-174.
[17] P. Ball, "Index aims for fair ranking of scientists", Nature, Vol. 436, No. 900, 2005. doi:10.1038/436900a
[18] G. Saad, "Exploring the h-index at the author and journal levels using bibliometric data of productive consumer scholars and business-related journals respectively",
Scientometrics, Vol. 69, No. 1, 2006, pp. 117-120.
[19] W. Glanzel, "On the h-index - A mathematical approach to a new measure of publication activity and citation impact", Scientometrics, Vol. 67, No. 2, 2006, pp. 315-321.
[20] T. Braun, W. Glanzel and A. Schubert, "A Hirsch-type index for journals", Scientometrics, Vol. 69, No. 1, 2006, pp. 169-173.
[21] K.M. McIntyre, L. Hawkes, A. Waret-Szkuta, S. Morand and M. Baylis, "The H-Index as a Quantitative Indicator of the Relative Impact of Human Diseases", PLoS ONE, Vol. 6, No. 5, 2011, e19558. doi:10.1371/journal.pone.0019558
[22] P. Jacso, “The h-index for countries in Web
of Science and Scopus”, Online Information
Review, Vol. 33, No. 4, 2009, pp. 831-837.
[23] International Telecommunication Union, "Measuring the information society: The ICT Development Index", Place des Nations, CH-1211 Geneva, Switzerland, 2009.
[24] UNESCO Institute for Statistics (UIS),
2011. http://www.uis.unesco.org.
[25] International Telecommunication Union, "Measuring the information society: The ICT Development Index", Place des Nations, CH-1211 Geneva, Switzerland, 2010.
[26] L.I. Meho and K. Yang, "Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar", Journal of the American Society for Information Science and Technology, Vol. 58, No. 13, 2007, pp. 2105-2125.
From the SelectedWorks of Nader Ale Ebrahim
December 2014
Optimize Your Article for Search Engine
Available at: http://works.bepress.com/aleebrahim/93
Preprint version of: N. Ale Ebrahim, “Optimize Your Article for Search Engine,” University of Malaya Research
Bulletin, vol. 2, no. 1, 23 December, 2014.
Optimize Your Article for Search Engine
Nader Ale Ebrahim
Research Support Unit, Centre of Research Services, IPPP, University of Malaya
Abstract
This article provides guidelines on how to optimize scholarly literature for academic search engines like Google Scholar, in order to increase an article's visibility and citations.
Introduction
Academic search engine optimization (ASEO) is a debatable topic [1]. Some researchers believe that "Designing scientific articles just for SEO would undermine the credibility and definitively the quality of the articles and the research presented" [2]. On the other hand, inclusion in a search engine's index improves authors' ability to make their articles available to the academic community [3]. "If you want your paper be "accepted" by a search engine you have to accept the search engines style guidelines" [2]. The literature indicates a significant and positive relationship between the number of versions of an article and its citations in both Google Scholar and ISI Web of Science [4]. Increasing visibility raises the number of versions of an article, and vice versa [5]. The number of "versions" is shown in a Google Scholar search result. Figure 1 shows the 33 different versions of an article entitled "Virtual Teams: a Literature Review" [6] and its number of citations.
Figure 1 The number of “versions” in the Google Scholar search result
Discussion
There are several techniques by which you can optimize your article for search engines:
1. Use strong keywords and synonyms. Once the keywords are chosen, they need to be mentioned in the right places: in the title, and as often as possible in the abstract and the body of the text (but, of course, not so often as to annoy readers) [1] (see the sketch after this list).
2. Complete the document metadata. When documents are converted to PDF, all metadata should be correct (especially author and title). Some search engines use PDF metadata to identify the file or to display information about the article on the search results page. It may also be beneficial to give a meaningful file name to each article [1].
3. Write a good and short title [7].
4. Spell authors' first and last names consistently.
5. Refer to reliable resources with high citations.
6. Select a proper journal.
7. Deposit your paper in different repositories.
These are some examples [8].
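As a small illustration of technique 1 above, the following is a minimal Python sketch (a hypothetical helper, not part of any cited tool) that counts how often candidate keywords appear in a title and abstract, so their placement can be checked before submission.

# Minimal sketch: count case-insensitive, whole-phrase keyword
# occurrences in a piece of text. Keyword choices are illustrative.
import re

def keyword_counts(text: str, keywords: list[str]) -> dict[str, int]:
    return {
        kw: len(re.findall(r"\b" + re.escape(kw) + r"\b", text, flags=re.IGNORECASE))
        for kw in keywords
    }

title = "Virtual Teams: a Literature Review"
abstract = "..."  # paste the article abstract here
print(keyword_counts(title + " " + abstract, ["virtual teams", "literature review"]))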
Google Scholar, as one of the academic search engines, indexes scholarly material from proprietary sources such as subscription and open-access journals, conference proceedings and white papers. Google Scholar weights heavily the title, the journal name and the author names, as well as multiple versions of an article (e.g., on your home page and in the UM Research Repository) [1] (please refer to http://authorservices.wiley.com/bauthor/seo.asp [7] for more detail).
Beel, Gipp and Wilde [1] believe that academic search engine optimization (ASEO) should be a common procedure for researchers, similar to selecting an appropriate journal for publication. ASEO should not be seen as a guide on how to cheat academic search engines. Rather, it is about helping academic search engines to understand the content of research papers and, thus, about making this content more widely and easily available.
References
[1] J. Beel, B. Gipp, and E. Wilde, "Academic Search Engine Optimization (ASEO)," Journal of Scholarly Publishing, vol. 41, no. 2, pp. 176-190, 2010.
[2] http://blog.webometrics.org.uk, "Academic Search Engine Optimization: An inevitable evil?," 19 September, 2014; http://blog.webometrics.org.uk/2010/03/academic-search-engine-optimization-an-inevitable-evil/.
[3] J. Beel and B. Gipp, "On the robustness of Google Scholar against spam," in Proceedings of the 21st ACM conference on Hypertext and hypermedia (HT'10), Toronto, Ontario, Canada, 2010, pp. 297-298.
[4] N. Ale Ebrahim, H. Salehi, M. A. Embi, M. Danaee, M. Mohammadjafari, A. Zavvari, M. Shakiba, and M. Shahbazi-Moghadam, "Equality of Google Scholar with Web of Science Citations: Case of Malaysian Engineering Highly Cited Papers," Modern Applied Science, vol. 8, no. 5, pp. 63-69, August 6, 2014.
[5] N. Ale Ebrahim, H. Salehi, M. A. Embi, F. Habibi Tanha, H. Gholizadeh, and S. M. Motahar, "Visibility and Citation Impact," International Education Studies, vol. 7, no. 4, pp. 120-125, March 30, 2014.
[6] N. Ale Ebrahim, S. Ahmed, and Z. Taha, "Virtual Teams: a Literature Review," Australian Journal of Basic and Applied Sciences, vol. 3, no. 3, pp. 2653-2669, 2009.
[7] "Optimizing Your Article for Search Engines," 23 September; http://authorservices.wiley.com/bauthor/seo.asp.
[8] N. Ale Ebrahim, H. Salehi, M. A. Embi, F. Habibi Tanha, H. Gholizadeh, S. M. Motahar, and A. Ordi, "Effective Strategies for Increasing Citation Frequency," International Education Studies, vol. 6, no. 11, pp. 93-99, October 23, 2013.
Modern Applied Science; Vol. 8, No. 2; 2014
ISSN 1913-1844
E-ISSN 1913-1852
Published by Canadian Center of Science and Education
Relationship among Economic Growth, Internet Usage and
Publication Productivity: Comparison among ASEAN and World’s
Best Countries
Hossein Gholizadeh1, Hadi Salehi2, Mohamed Amin Embi3, Mahmoud Danaee4, Seyed Mohammad Motahar5,
Nader Ale Ebrahim6, Farid Habibi Tanha7 & Noor Azuan Abu Osman1
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
3 Faculty of Education, Universiti Kebangsaan Malaysia, Bangi, 43600, Malaysia
4 Faculty of Agriculture, Roudehen Branch, Islamic Azad University, Roudehen, Iran
5 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, 43600, Malaysia
6 Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
7 Department of Financial Sciences, University of Economic Sciences, Tehran, 1593656311, Iran
Correspondence: Hadi Salehi, Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University,
Najafabad, Isfahan, Iran. Tel: 98-913-104-1419. E-mail: hadisalehi1358@yahoo.com
Received: January 5, 2014   Accepted: February 7, 2014   Online Published: March 14, 2014
doi:10.5539/mas.v8n2p160   URL: http://dx.doi.org/10.5539/mas.v8n2p160
Abstract
Publication productivity, as measured by the number of papers, has been regarded as one of the main indicators of the reputation of countries and institutions. Nevertheless, the relationship among research publications, economic growth and the World Wide Web in ASEAN countries is still unclear. The main intention of this study was to identify publication productivity among ASEAN and the world's top ten countries over the last 16 years (1996-2011). This study also aimed at finding the relationship among publications, gross domestic product (GDP) and internet usage. Furthermore, the publication trend in the top 10 Malaysian universities was evaluated for the same period. The Scopus database was used to find the overall documents, overall citations, citations per document and international collaboration from 1996 to 2011 for each country. The World Bank database (World Data Bank) was used to collect the data for GDP and the number of internet users. Moreover, to evaluate the top 10 Malaysian universities, the number of published articles, conference papers, reviews, and letters for the same period was collected. The results of this study showed significant differences between ASEAN and the top 10 countries regarding publication productivity. Moreover, a positive and significant relationship was observed between the publication indices, GDP and internet usage for these countries. Surprisingly, international collaboration had a significant and negative relationship with economic growth. Malaysia had the fewest citations per document (7.64) and the least international collaboration (36.9%) among ASEAN countries. In conclusion, international collaboration between academic institutes and researchers is influenced by economic growth and access to the internet in these countries. Furthermore, publication trends in ASEAN countries are promising. However, policy makers and science managers should try to find different ways to increase the quality of research publications and to raise citations per document.
Keywords: ASEAN countries, publication productivity, citation, internet user, collaboration, economic growth
1. Introduction
The Association of Southeast Asian Nations (ASEAN), a geo-political and economic organization of ten countries located in Southeast Asia, was formed on 8 August 1967 by Indonesia, Malaysia, the Philippines, Singapore and Thailand. Since then, the membership has expanded to include Brunei, Burma (Myanmar), Cambodia, Laos, and Vietnam. One of the main aims of this association is accelerating economic growth (Sarel, 1997). The main tool for measuring a country's economy is gross domestic product (GDP), which is the market value of all officially recognized final goods and services produced within a country in a given period of time.
Based on the literature, there is a relationship between economic growth and education and research publication. However, this relationship has not been well supported by empirical evidence (Jin & Jin, 2013; Nelson & Phelps, 1966; Lucas, 1988; Becker, 2009; Romer, 1990). Research by Mankiw et al. (1992) and Barro (1991) showed positive and significant effects of education on economic growth, while the study by Bils and Klenow (2000) showed a possible reverse relationship between economic growth and education. Moreover, in a recent research study, Jin and Jin (2013) indicated that publication productivity in different fields has dissimilar effects on economic growth. For instance, engineering and science have positive influences on economic growth in comparison with publication output in the field of social sciences (J. Jin & L. Jin, 2013). These days, publication productivity is a more central task for researchers and students than in past decades, and the results of research should benefit the community (Zain et al., 2009). Based on new developmental principles, the number of knowledgeable employees could be increased by education (J. Jin & L. Jin, 2013). In addition, an educated human resource helps the economy of a country grow faster. Moreover, it is widely observed that the world's top productive research universities are in highly developed countries such as the United States, the United Kingdom and Canada.
Publication count is an indicator of research productivity and is used to rank countries and universities (Liu & Cheng, 2005; Meho & Spurgin, 2005; Narin & Hamilton, 1996; Toutkoushian et al., 2003; Yazit & Zainab, 2007).
It can also be used to determine authors’ productivity or the publication productivity of research groups (Liu &
Cheng, 2005; Hart, 2000; Uzun, 2002; Gu & Zainab, 2001; Fox, 1983). The number of citations of previously
published works is an indicator of its subsequent recognition and impact in a field of study. Reviewing articles
that are frequently cited can provide information about the dominant areas of a discipline and also highlight the
growth of particular fields. Furthermore, top-cited articles are often written by recognized experts who can offer
insights into future directions of the discipline (Joynt & Leonard, 1980; Kelly et al., 2010; Lefaivre & O’Brien,
2011; Ponce & Lozano, 2010).
The Science Citation Index appeared in 1961 (followed by the Social Sciences Citation Index in 1966). Using this index, it has been possible to answer questions such as "who publishes more articles?" and "whose articles are being cited more often?" (Bar-Ilan, 2008; Heeks & Bailur, 2007; Meho & Yang, 2007). Jorge E. Hirsch, a physicist at UCSD, developed the H-index in 2005 as an alternative to traditional bibliometric measures. The H-index is a single bibliometric indicator that combines measures of productivity and impact into one. Hirsch argued that this combination reflects the impact of a publication on the corresponding scientific community (Braun et al., 2006; Schubert & Glänzel, 2007). The H-index retrieved from citation indexes (Scopus, Google Scholar, and Web of Science) is used to measure scientific performance and research impact based on the number of publications and citations of a scientist. It is also easily available and may be used for performance measurement of scientists and for recruitment decisions (Aghaei et al., 2013; Ale Ebrahim et al., 2013; Brown & Gardner, 1985; Fooladi et al., 2013).
In this study, we used the human development index (HDI) to find the top ten countries around the world. HDI is
a composite statistic used to rank countries according to their development levels from “very high” to “low”.
Countries are placed based on life expectancy, education, living standard, child welfare, health care, economic
welfare, and population’s happiness.
The main purpose of this study was to compare publication productivity among ASEAN and the world's top ten countries from 1996 to 2011. In addition, we intended to find the relationship among publication productivity, gross domestic product (current US$), and internet usage. Finally, we aimed to investigate the publication trend of the top 10 Malaysian universities in the same period. In this study, the Scopus database (www.scimagojr.com) was used since it is a primary full-text scientific database and offers about 20% more coverage than Web of Science (Falagas et al., 2008).
We hypothesized that publications of academic papers, as research outputs, have a positive relationship with economic growth and with the number of people having access to the worldwide network, and that there is a significant difference between the publication productivity of ASEAN and that of the world's top ten countries.
2. Method
To identify publication productivity among ASEAN and the world's top ten countries, the Scopus database (SCImago) was used to compare overall documents and citations, cited documents, un-cited documents, self-citations, citations per document and international collaboration from 1996 to 2011. Moreover, the World Bank database (World Data Bank) was used to collect data for GDP and the number of internet users in these countries.
To find the top ten countries around the world, we used the human development index (HDI) (Anand & Sen, 1994). Moreover, to find the top 10 universities in Malaysia, we used the number of publications in the Scopus database. Furthermore, to evaluate these universities, the number of published articles, conference papers, reviews, and letters for the same period (1996-2011) was collected.
2.1 Data Analysis
Prior to data analysis, a normality test was performed on the obtained data. The normality test revealed that the variables were not normally distributed, so non-parametric tests were used to answer the research questions. To evaluate the relationship among the publication indices, GDP and internet usage, the Spearman correlation coefficient test was applied. Moreover, to compare ASEAN countries with the top ten countries, the Mann-Whitney U test was used. To evaluate the differences in the number of documents, citations, self-citations, and citations per document among ASEAN countries, the Kruskal-Wallis test was applied. The Kruskal-Wallis test is a non-parametric test that compares three or more unmatched groups; it is an extension of the Mann-Whitney U test that allows the comparison of more than two independent groups.
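The following minimal sketch (with synthetic numbers, not the study's data) shows how these three non-parametric tests can be run with scipy.stats; the variable names and generated series are illustrative assumptions only.

# Minimal sketch of the non-parametric tests used above, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic yearly document counts for three countries over 16 years
groups = [rng.poisson(lam, 16) for lam in (60, 700, 4700)]

h_stat, p_kw = stats.kruskal(*groups)                    # Kruskal-Wallis across countries
u_stat, p_mw = stats.mannwhitneyu(groups[0], groups[2])  # two-group comparison
gdp = rng.normal(100, 10, 16).cumsum()                   # synthetic GDP series
rho, p_sp = stats.spearmanr(groups[2], gdp)              # documents vs. GDP

print(f"Kruskal-Wallis H={h_stat:.2f} (p={p_kw:.4f}); "
      f"Mann-Whitney U={u_stat:.1f} (p={p_mw:.4f}); "
      f"Spearman rho={rho:.2f} (p={p_sp:.4f})")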
3. Results
Based on the human development index (HDI), the top ten countries around the world are: Norway, Australia, the USA, the Netherlands, Germany, New Zealand, Ireland, Sweden, Switzerland, and Japan.
Table 1. Comparison between ASEAN countries and the top 10 countries around the world (publication productivity and country ranking)

Region    Country        Overall Documents (1996-2011) in Scopus    Country ranking*: Publication    Country ranking*: H-Index
ASEAN     Brunei         1064       133    140
ASEAN     Cambodia       1296       128    112
ASEAN     Indonesia      16139      63     57
ASEAN     Laos           853        141    135
ASEAN     Malaysia       75530      42     54
ASEAN     Myanmar        906        139    146
ASEAN     Philippines    11326      70     56
ASEAN     Singapore      126881     32     30
ASEAN     Thailand       69637      43     39
ASEAN     Vietnam        13172      67     60
TOP TEN   Norway         141143     31     21
TOP TEN   Australia      592533     11     11
TOP TEN   USA            6149455    1      1
TOP TEN   Netherlands    487784     14     8
TOP TEN   Germany        1581429    5      3
TOP TEN   New Zealand    114495     34     26
TOP TEN   Ireland        91125      39     27
TOP TEN   Sweden         337135     18     10
TOP TEN   Switzerland    350253     17     9
TOP TEN   Japan          1604017    4      6

* Based on 238 countries in the world.
Table 2. Kruskal-Wallis test

Documents
ASEAN country    Mean        SD
Brunei           66.5        32.26
Cambodia         81          64.46
Indonesia        1008.688    663.05
Laos             53.3125     45.95
Malaysia         4720.625    5443.05
Myanmar          56.625      41.21
Philippines      707.875     283.60
Singapore        7930.063    3940.70
Thailand         4352.313    2963.70
Vietnam          823.25      609.77
Chi-square = 138, p-value < 0.01

Overall Citation
ASEAN country    Mean        SD
Brunei           484.0625    194.40
Cambodia         801.1875    614.26
Indonesia        7406.875    2823.18
Laos             478.1875    400.11
Malaysia         17601.19    5917.94
Myanmar          428.0625    245.58
Philippines      7586.875    2320.56
Singapore        83167.75    32859.46
Thailand         33060.56    14704.18
Vietnam          6725.25     2789.02
Chi-square = 140.628, p-value < 0.01

Self-citation
ASEAN country    Mean         SD
Brunei           43.4375      19.69
Cambodia         88.0625      72.82
Indonesia        857.4375     223.05
Laos             64.875       50.88
Malaysia         4345.5625    2483.06
Myanmar          33.625       18.62
Philippines      866.0625     240.79
Singapore        11939.563    4584.45
Thailand         5831.5       2407.33
Vietnam          789.25       394.57
Chi-square = 142.822, p-value < 0.01

Citation per document
ASEAN country    Mean       SD
Brunei           9.5081     5.37
Cambodia         12.2075    6.05
Indonesia        10.8456    5.18
Laos             11.8269    5.86
Malaysia         7.64       3.98
Myanmar          11.5994    6.61
Philippines      12.9463    6.20
Singapore        12.5063    5.02
Thailand         10.6606    4.97
Vietnam          11.755     5.75
Chi-square = 13.099, p-value < 0.01
To investigate the differences among ASEAN countries, the Kruskal-Wallis test was applied. The results for all these indices showed significant differences among these countries.
As Tables 1 and 2 show, Singapore had the most publications (126881 documents) and Laos had the fewest (853 documents) among ASEAN countries, while Malaysia (with 75530 documents) ranked second among ASEAN countries and forty-second among all the countries of the world (based on 238 countries). Laos, with 853 documents and a publication ranking of 141 from 1996 to 2011, was the last country in this region. The USA was the first country in the world, with the highest number of publications (6149455), H-index (1035) and overall citations (114546415) (see Table 1).
Interestingly, Malaysia had the fewest citations per document (7.64 overall) and the least collaboration with the world (37%) among ASEAN countries over 1996-2011, whereas Laos had the most international collaboration (91%) in this region during the same period (see Figure 1 and Table 2). Furthermore, Malaysia and the United States had the highest self-citation rates, with 24% and 48% of all citations received during that period among ASEAN and the top ten countries, respectively (see Table 2 and Figure 1). As Table 2 shows, there are significant differences in the number of documents (H = 138, p < 0.01), citations (H = 140.62, p < 0.01), self-citations (H = 142.82, p < 0.01), and citations per document (H = 13.09, p < 0.01) among ASEAN countries.
Publication trends in ASEAN countries are promising in comparison with the top ten countries in the world (see Figure 2 and Table 3). There was a significant difference (p < 0.01) between ASEAN and the developed countries in the number of documents, citable documents, citations (overall), self-citations, cited documents and research collaboration with the world. The rate of growth in scientific publication (Figure 2) was 24% in Cambodia (the highest in ASEAN), while Japan had 1.4% growth in publication among these countries.
To evaluate the relationship among the publication indices, GDP and internet usage, the Spearman correlation coefficient was applied. Table 4 shows the correlations between the indices and GDP and internet usage for ASEAN and the top ten countries separately. The results showed a positive and significant relationship between GDP and the number of publications, citable documents, citations, self-citations and cited documents in ASEAN and the top ten countries in the world. However, one variable, international collaboration, had a significant and negative relationship with GDP.
Regarding the relationship between internet usage and all the variables, we found a significant and positive relationship in ASEAN countries except for the variable of international collaboration. However, there was a positive and significant (p < 0.01) relationship between international collaboration and internet usage in the top ten countries.
Prior to data analysis, a normality test was conducted. The results revealed that the variables were not normally distributed, so we used the Kruskal-Wallis test, a non-parametric test, to examine publication productivity among the top ten Malaysian universities. Table 5 shows the mean and standard deviation for all types of publications among the top ten universities in Malaysia. The results of the Kruskal-Wallis test confirmed a significant difference among these universities. The institution with the first ranking in Malaysia was the University of Malaya, with 13105 publications from 1996 to 2011. The main field of research in this university was physics and astronomy, with 15% of all publications. Moreover, the result of the Spearman correlation coefficient test showed a positive and significant relationship among the numbers of articles, conference papers, reviews and letters published during these years. Except for the coefficient between conference papers and letters, all coefficients showed a strong relationship.
Table 3. Comparison between ASEAN and the top 10 countries for publication

                               ASEAN                  TOP TEN COUNTRIES
Index                          Mean       SD          Mean         SD           Z          p-value
Documents                      1980.0     3453.7      71558.6      112595.0     -14.5      <0.01
Citable Documents              1859.9     3263.2      67369.3      104758.0     -14.518    <0.01
Citation (overall)             15774.0    27026.4     1190700.0    2256220.0    -14.9      <0.01
Self-citation                  2485.9     4098.7      446135.2     1079040.0    -14.86     <0.01
Cited Documents                1277.8     2021.3      53836.4      84267.2      -14.748    <0.01
International Collaboration    63.2       19.4        39.8         11.2         -10.206    <0.01

To compare ASEAN countries with the top ten countries, the Mann-Whitney U test was applied. The results showed significant differences for all indices.
Table 4. Spearman correlation coefficients

REGION     Index                            GDP        Internet users
ASEAN      Documents                        .853**     .681**
ASEAN      Citable Documents                .854**     .669**
ASEAN      Citation (overall)               .699**     .511**
ASEAN      Self-citation                    .737**     .534**
ASEAN      Cited Documents                  .823**     .646**
ASEAN      % International Collaboration    -.502**    -.595**
TOP TEN    Documents                        .972**     .201*
TOP TEN    Citable Documents                .973**     .199*
TOP TEN    Citation (overall)               .757**     -.224**
TOP TEN    Self-citation                    .780**     -.165*
TOP TEN    Cited Documents                  .940**     0.121
TOP TEN    % International Collaboration    -.407**    .557**

**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
Table 5. Kruskal-Wallis test applied to examine publication productivity among the top ten Malaysian universities

                                             Article         Conference      Review        Letter
University                                   Mean    SD      Mean    SD      Mean    SD    Mean    SD
University of Malaya                         638     618     125     152     32      34    9       6
Universiti Sains Malaysia                    570     609     131     160     19      24    4       4
Universiti Putra Malaysia                    570     645     114     143     20      24    1       2
Universiti Kebangsaan Malaysia               416     509     188     287     14      20    2       3
Universiti Teknologi Malaysia                180     262     210     282     6       12    0       0
Universiti Teknologi MARA                    103     153     150     281     4       7     0       1
International Islamic University Malaysia    98      131     57      92      4       4     1       2
Multimedia University                        111     100     89      87      4       5     0       0
Universiti Teknologi Petronas                51      98      85      155     2       4     0       0
Universiti Malaysia Perlis                   32      58      43      70      1       2     0       0
Chi-square                                   84              22              73            90
p-value                                      <0.01           <0.01           <0.01         <0.01
Spearman's rho correlation coefficients

              Article    Conference    Review
Conference    .817**
Review        .894**     .717**
Letter        .746**     .488**        .741**
[Figure 1. Comparison among ASEAN and World's Best Countries regarding self-citation (A); citation per document (B); un-cited documents (C); and international collaboration (D)]
[Figure 2. Publication trend in ASEAN countries and World's Best Countries (A); publication trend in Malaysia (B); citation per document in Malaysia (C)]
4. Discussion and Conclusion
In the current study, we compared ASEAN and the top 10 countries in the world in the Scopus database (SCImago) regarding the relationship among economic growth, internet usage and publication productivity from 1996 to 2011. In addition, we made a comparison among the top 10 Malaysian universities. We used SJR (SCImago Journal and Country Rank), a measure of the scientific prestige of scholarly sources derived from the Google PageRank™ algorithm, which attempts to even out differences in citation practice between subject fields and to facilitate direct comparisons of sources. Furthermore, the SCImago Journal and Country Rank portal displays journal and country scientific indicators developed from the information contained in the Scopus® database. These indicators can be used to assess and analyze scientific domains.
Publication productivity can be a good indicator of research output and is used to rank countries, research institutes, or researchers in different fields (Liu & Cheng, 2005; Meho & Spurgin, 2005; Narin & Hamilton, 1996; Toutkoushian et al., 2003; Yazit & Zainab, 2007). The impact of a publication is assessed in terms of the number of citations that it has received in relation to other outputs in the journal (Yi et al., 2008). However, the number of times an article is cited is highly correlated with the length of time since its publication (Yi et al., 2008). There was a significant difference (p < 0.01) in citations per document (Yi et al., 2008; Calver & Bradley, 2009) between the ASEAN region (11.1 citations per document) and the developed countries (17.7) (see Table 3). Malaysia received 31.5% and 57.1% fewer citations per document than the ASEAN average and the top ten developed countries, respectively. However, Malaysia had the highest self-citation rate in this region, with almost 24% of all citations received during this period. The United States had the highest number of publications as well as the highest self-citation rate (48%) in the world. Malaysia had the least collaboration with the world in comparison with other ASEAN countries, which could be one of the reasons for receiving fewer citations per document (Ale Ebrahim et al., 2013). Based on the literature, the mean citation per publication of collaborative papers is higher than that of single-country publications.
Publication trends in ASEAN countries, with 10.1% growth, are promising in comparison with the developed countries studied in our research (2.3%). Figure 2 represents the publication trend and the number of publications in Malaysia between 1996 and 2011. The number of publications increased dramatically from 4783 documents in 2007 to 18875 in 2011 (almost 4 times). However, the number of citations decreased rapidly from 5.09 to 0.47 citations per document during these years (an 11-fold drop). One of the main reasons for the increase in the number of publications in Malaysia could be its heavy focus on increasing the quality of research in its research universities, such as the University of Malaya. Malaysia spends 1% of its gross domestic product on research and development projects, as determined in the 10th Malaysian Plan. Moreover, increasing the number of PhD students and changing their programs from conventional dissertations to paper-based dissertations (which require publication in peer-reviewed journals) could be another reason. The number of PhD students in Malaysia has increased 10 times in recent years (from about 4,000 students in 2002 to almost 40,000 in 2012). Table 5 shows the top 10 universities in Malaysia. The first five have been designated research universities and receive additional government funding. Two thirds (68.8%) of all Malaysian publications were produced by these research universities from 1996 to 2011.
Chemistry, physics, and astronomy are the leading fields in terms of the number of publications and citations; institutes working in these fields can publish more papers and attract more citations. Malaysian universities mostly work on engineering and medicine, in comparison with the top universities located in the top countries such as the United States.
Our findings showed a positive and significant relationship between GDP and the number of publications. This finding is in line with previous research that found a positive and significant relationship between education and economic growth (Mankiw et al., 1992; Barro, 1991; Bils & Klenow, 2000). Moreover, a recent study by Jin and Jin (2013) indicated that publication productivity in different fields has dissimilar effects on economic growth. Surprisingly, for one variable, international collaboration, we found a significant and negative relationship with GDP in ASEAN countries, while the number of people with access to the World Wide Web in developed countries had a significant and positive relationship with research collaboration projects. Countries with more economic growth, like the United States, had less research collaboration with other countries.
Publication trends in ASEAN countries, and especially in Malaysia (a 20.14-fold increase), are encouraging in comparison with the top ten developed countries. Nevertheless, these publications have not attracted reasonable citations. Researchers, science managers, and policy makers should try to find different ways to increase citations per document (Ale Ebrahim et al., 2013), such as improving the quality and visibility of their research
and investing more in research studies whose outputs lead to more rapid economic growth.
Acknowledgements
The authors wish to express their sincere gratitude to Mrs. Elham Sadat Yahyavi and Mrs. Rezvan ...... for their
kind assistance in this project.
References
Aghaei Chadegani, A., Salehi, H., Md Yunus, M., Farhadi, H., Fooladi, M., Farhadi, M., & Ale Ebrahim, N.
(2013). A comparison between two main academic literature collections: Web of Science and Scopus
databases. Asian Social Science, 9(5), 18-26. http://dx.doi.org/10.5539/ass.v9n5p18
Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi, T. F., Gholizadeh, H., & Motahar, S. M. (2013). Effective
Strategies for Increasing Citation Frequency. International Education Studies, 6, 93-99.
http://dx.doi.org/10.5539/ies.v6n11p93
Anand, S., & Sen, A. (1994). Human development index: methodology and measurement. Human Development
Report Office (HDRO), United Nations Development Programme (UNDP).
Bar-Ilan, J. (2008). Which h-index?—A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2),
257-271. http://dx.doi.org/10.1007/s11192-008-0216-y
Barro, R. J. (1991). Economic growth in a cross section of countries. The quarterly journal of economics, 106(2),
407-443. http://dx.doi.org/10.2307/2937943
Becker, G. S. (2009). Human capital: A theoretical and empirical analysis, with special reference to education.
University of Chicago Press.
Bils, M., & Klenow, P. J. (2000). Does schooling cause growth? American Economic Review, 90(5), 1160-1183.
http://dx.doi.org/10.1257/aer.90.5.1160
Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69(1), 169-173.
http://dx.doi.org/10.1007/s11192-006-0147-4
Brown, L. D., & Gardner, J. C. (1985). Using citation analysis to assess the impact of journals and articles on
contemporary accounting research (CAR). Journal of Accounting Research, 23(1), 84-109.
http://dx.doi.org/10.2307/2490908
Calver, M. C., & Bradley, J. S. (2009). Should we use the mean citations per paper to summarise a journal’s
impact or to rank journals in the same field? Scientometrics, 81(3), 611-615.
http://dx.doi.org/10.1007/s11192-008-2229-y
Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2008). Comparison of PubMed, Scopus, web of
science, and Google scholar: strengths and weaknesses. The FASEB Journal, 22(2), 338-342.
http://dx.doi.org/10.1096/fj.07-9492LSF
Fooladi, M., Salehi, H., Md Yunus, M., Farhadi, M., Aghaei Chadegani, A., Farhadi, H., & Ale Ebrahim, N.
(2013). Does Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9(5),
176-182. http://dx.doi.org/10.5539/ass.v9n5p176
Fox, M. F. (1983). Publication productivity among scientists: A critical review. Social Studies of Science, 13(2),
285-305. http://dx.doi.org/10.1177/030631283013002005
Gu, Y., & Zainab, A. N. (2001). Publication productivity of Malaysian researchers in the field of Computer
Science and Information Technology. Malaysian Journal of Library & Information Science, 6(1), 1-23.
Hart, R. L. (2000). Co-authorship in the academic library literature: a survey of attitudes and behaviors. The
Journal of Academic Librarianship, 26(5), 339-345. http://dx.doi.org/10.1016/S0099-1333(00)00140-3
Heeks, R., & Bailur, S. (2007). Analyzing e-government research: Perspectives, philosophies, theories, methods,
and practice. Government Information Quarterly, 24(2), 243-265. http://dx.doi.org/10.1016/j.giq.2006.06.005
Jin, J. C., & Jin, L. (2013). Research publications and economic growth: evidence from cross-country
regressions. Applied Economics, 45(8), 983-990. http://dx.doi.org/10.1080/00036846.2011.613785
Joynt, R. L., & Leonard, J. A. (1980). Dantrolene sodium suspension in treatment of spastic cerebral palsy.
Developmental Medicine & Child Neurology, 22(6), 755-767. http://dx.doi.org/10.1111/j.1469-8749.1980.tb03742.x
Kelly, J. C., Glynn, R. W., O’Briain, D. E., Felle, P., & McCabe, J. P. (2010). The 100 classic papers of
orthopaedic surgery a bibliometric analysis. Journal of Bone & Joint Surgery, British Volume, 92(10),
1338-1343. http://dx.doi.org/10.1302/0301-620X.92B10.24867
Lefaivre, K. A., & O’Brien, P. J. (2011). 100 most cited articles in orthopaedic surgery. Clinical Orthopaedics
and Related Research®, 469(5), 1487-1497. http://dx.doi.org/10.1007/s11999-010-1604-1
Liu, N. C., & Cheng, Y. (2005). The academic ranking of world universities. Higher education in Europe, 30(2),
127-136. http://dx.doi.org/10.1080/03797720500260116
Lucas Jr, R. E. (1988). On the mechanics of economic development. Journal of monetary economics, 22(1), 3-42.
http://dx.doi.org/10.1016/0304-3932(88)90168-7
Mankiw, N. G., Romer, D., & Weil, D. N. (1992). A contribution to the empirics of economic growth. The
quarterly journal of economics, 107(2), 407-437. http://dx.doi.org/10.2307/2118477
Meho, L. I., & Spurgin, K. M. (2005). Ranking the research productivity of library and information science
faculty and schools: An evaluation of data sources and research methods. Journal of the American Society
for Information Science and Technology, 56(12), 1314-1331. http://dx.doi.org/10.1002/asi.20227
Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and rankings of LIS faculty: Web of
Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and
Technology, 58(13), 2105-2125. http://dx.doi.org/10.1002/asi.20677
Narin, F., & Hamilton, K. S. (1996). Bibliometric performance measures. Scientometrics, 36(3), 293-310.
http://dx.doi.org/10.1007/BF02129596
Nelson, R. R., & Phelps, E. S. (1966). Investment in humans, technological diffusion, and economic growth. The
American Economic Review, 69-75.
Ponce, F. A., & Lozano, A. M. (2010). Highly cited works in neurosurgery. Part I: The 100 top-cited papers in
neurosurgical journals: A review. Journal of Neurosurgery, 112(2), 223-232.
http://dx.doi.org/10.3171/2009.12.JNS091599
Romer, P. M. (1990). Endogenous technological change. Journal of Political Economy, S71-S102.
http://dx.doi.org/10.1086/261725
Sarel, M. (1997). Growth and productivity in ASEAN countries. International Monetary Fund.
Schubert, A., & Glänzel, W. (2007). A systematic analysis of Hirsch-type indices for journals. Journal of
Informetrics, 1(3), 179-184. http://dx.doi.org/10.1016/j.joi.2006.12.002
Toutkoushian, R. K., Porter, S. R., Danielson, C., & Hollis, P. R. (2003). Using publications counts to measure
an institution's research productivity. Research in Higher Education, 44(2), 121-148.
http://dx.doi.org/10.1023/A:1022070227966
Uzun, A. (2002). Productivity ratings of institutions based on publication in Scientometrics, Informetrics, and
Bibliometrics, 1981-2000. Scientometrics, 53(3), 297-307. http://dx.doi.org/10.1023/A:1014864827441
Yazit, N., & Zainab, A. N. (2007). Publication productivity of Malaysian authors and institutions in LIS.
Malaysian Journal of Library and Information Science, 12.
Yi, H., Ao, X., & Ho, Y. S. (2008). Use of citation per publication as an indicator to evaluate pentachlorophenol
research. Scientometrics, 75(1), 67-80. http://dx.doi.org/10.1007/s11192-007-1849-y
Zain, Z. M., Ishak, R., & Ghani, E. K. (2009). The influence of corporate culture on organisational commitment:
A study on a Malaysian listed company. European Journal of Economics, Finance and Administrative
Sciences, 17(17), 16-26.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/3.0/).
Research World, Volume 10, 2013
Article A10.4
Introduction to the Research Tools Mind Map
Nader Ale Ebrahim
Research Support Unit, Centre of Research Services
Institute of Research Management and Monitoring (IPPP)
University of Malaya, Malaysia
aleebrahim[at]siswa.um.edu.my
Published Online: June 14, 2013
With the increasing use of information and communications technology (ICT), researchers are able to use
computer software tools to find, organise, manipulate, analyse, and share relevant information. However,
there are hundreds of such tools to select from, for various research-related uses. I have collected over
700 tools that can help researchers do their work efficiently. It is assembled as an interactive Web-based
mind map, titled Research Tools, which is updated periodically. Created using the MindMeister software
platform, Research Tools provides a convenient interface with zoom in/out, screen drag, and other user-friendly features.
Being a mind map, Research Tools consists of a hierarchical set of nodes. It has four main nodes: (1)
Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility
and impact, and six auxiliary nodes. In the mind map, each parent node has several child nodes
associated with it, which can be accessed by clicking on the plus (+) sign. Several free tools can be found
in the child nodes. Some paid tools are also included.
The four main nodes are described below, listing some tools as examples. In the mind map, a green tick
sign indicates my preference for some tools. Users can access the tools by clicking on the arrow sign that
appears next to the tool title.
1. Searching the Literature
Tools under this node would assist first-year research students and other early-stage researchers in
expanding their knowledge base and developing a research focus (Ale Ebrahim, 2013). A sample of
these tools is given below, grouped under the specific purposes for which these tools might be used:
(a) Find keywords: Google AdWords, Google Trends, Springer Realtime, KwMap
(b) Develop a new keyword/phrase: Microsoft Adcenter Lab, keyword density analyser, Free Keyword
Tool
(c) Find research materials: Web of Science, Scopus®, ProQuest, Research Papers in Economics,
Munich Personal RePEc Archive (MPRA)
(d) Evaluate a paper: Publish or Perish, ReaderMeter, Microsoft Academic Research
(e) Access databases: Social Science Research Network, Refdoc, iSEEK Education, Database of
Electrical Engineering & Computer Science
(f) Compare resources: Top 30 Web Tools to Search, 100 Useful Tips and Tools to Research the Deep
Web
(g) Search visually: Kartoo, Redz
2. Writing a Paper
Several tools on research writing have been assembled under this node. These tools help researchers in
various aspects of scientific and scholarly writing. Here is a sample of such tools:
(a) Write and edit: Ginger, Spell Checker, Plagiarisma, Springer Exemplar
(b) Analyse and summarise texts: Smmry, Tools 4 Noobs, Free Summarizer
(c) Search desktop: DtSearch, Copernic Desktop Search
(d) Write collaboratively: Google Drive, Adobe Acrobat Professional
(e) Synthesise and visualise information: MindMeister, MindMaple, Text2mindmap
(f) Manage bibliographic data: Mendeley, EndNote, Zotero
(g) Build and share article collections: Bibliogo, WizFolio
(h) Detect and avoid plagiarism: Viper, The Plagiarism Checker, Turnitin, Paper Rater
3. Targeting Suitable Journals
Sometimes, researchers are not sure about the journal they should target for publishing their research
papers. They would like to evaluate a journal’s visibility and credibility before submitting their papers.
Researchers can use a range of tools to identify, assess, and select appropriate journals with a view to
enhancing their publication profile (Ale Ebrahim, 2010).
(a) Identify potential journals: Elsevier journal finder, JANE (from The Biosemantics Group), Springer
journal advisor
(b) Detect relevant journals: Google Alerts, My Citation Alerts (from Web of Knowledge)
(c) Evaluate journal quality (impact factor and journal ranking): Journal Citation Report (Thomson
Reuters), SJR (Scimago Lab), Eigenfactor® (University of Washington)
4. Enhancing Visibility and Impact
Publishing a research paper in a scholarly journal is necessary but not sufficient for receiving citations in
the future. We need to ensure that the paper is visible to the relevant users and authors. There are tools that
help in enhancing the visibility and readership of research papers. Effective use of these tools can result
in increased citations and, thus, improve the h-index of the author (h-index shows the author’s influence
as a researcher, see Ale Ebrahim, 2012; Aghaei Chadegani et al., 2013). Here is a sample of tools to
increase the visibility of one’s published papers:
(a) Create an online CV: ResearcherID, Publications List, ORCID, Google Scholar My Citations, A visual
resume
(b) Create an online repository: CiteuLike, Getcited, PubZone, Social Science Open Access Repository,
arXiv, HAL
(c) Share bookmarks: Diigo, Delicious, BibSonomy, ScienceWISE
(d) Publish and share information: Scribd, Docstoc®, figshare, Udemy
(e) Network with other researchers: ResearchGate, Academia, LinkedIn
5. Auxiliary Nodes
Research tools which could not be categorised under the above four main nodes have been placed under
six auxiliary nodes. The auxiliary nodes are: Survey, Links, Keeping up-to-date (Alert service), h-index,
General, and Download. The tools under these nodes can assist researchers in keeping track of their
research interest and activities in various ways.
(a) Stay current with research: Science direct alerts, MIT press, PubMed, Conference Alerts, WikiCFP
(b) Collect data: SurveyMonkey, Formsite, Survey share, Google Docs, eSurveysPro, Free Online
Survey, KwikSurvey
(c) h-index: Prediction scientific h-index, Finding h-index in Web of Science, Optimizing your article for
search engines, 10 Ways to Increase Usage and Citation of your Article Using Social Media (a computation sketch follows this list)
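For reference, the h-index itself is simple to compute: an author has index h if h of his or her papers have received at least h citations each. A minimal sketch in Python, using hypothetical citation counts:

```python
# Minimal sketch: computing an author's h-index from a list of citation counts.
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for six papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3: three papers with >= 3 citations
```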
6. Conclusion
We can save plenty of research time by using specific tools for specific purposes. This can facilitate quick
progress and early completion of research projects. In my experience, research students who use these
tools are able to complete their literature review in less than 3 months.
Acknowledgement
Sincere thanks to Professor D. P. Dash for his kind encouragement to write this article and his useful
advice on its organisation.
References
Aghaei Chadegani, A., Salehi, H., Yunus, M. M., Farhadi, H., Fooladi, M., Farhadi, M., & Ale Ebrahim, N.
(2013). A comparison between two main academic literature collections: Web of Science and Scopus
databases. Asian Social Science, 9(5), 18-26. Retrieved from http://dx.doi.org/10.5539/ass.v9n5p18
Ale Ebrahim, N. (2010). Target ISI journals: How to write/publish ISI papers [Presentation material].
Retrieved from http://works.bepress.com/aleebrahim/1/
Ale Ebrahim, N. (2012). Publication marketing tools: Enhancing research visibility and improving citations
[Presentation material]. Retrieved from http://works.bepress.com/aleebrahim/64/
Ale Ebrahim, N. (2013). The effective use of research tools and resources. [Presentation material].
Retrieved from http://works.bepress.com/aleebrahim/73/
Suggested Citation: Ale Ebrahim, N. (2013). Introduction to the Research Tools mind map. Research
World, 10, Article A10.4. Retrieved from http://www1.ximb.ac.in/RW.nsf/pages/A10.4
Copyleft
The article may be used freely, for a noncommercial purpose, as long as the original source is
properly acknowledged.
Xavier Institute of Management, Xavier Square, Bhubaneswar 751013, India
Research World (ISSN 0974-2379) http://www1.ximb.ac.in/RW.nsf/pages/Home
American J. of Engineering and Applied Sciences 5 (1): 9-14, 2012
ISSN 1941-7020
© 2012 Science Publications
Technology Use in the Virtual R&D Teams
1Nader Ale Ebrahim, 2Shamsuddin Ahmed, 1Salwa Hanim Abdul Rashid and 2Zahari Taha
1Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2Faculty of Manufacturing Engineering and Management Technology, University Malaysia Pahang, 26300 Gambang, Pahang, Malaysia
Corresponding Author: Nader Ale Ebrahim, Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract: Problem statement: Although the literature establishes the important role of technology in the
effectiveness of virtual Research and Development (R&D) teams for new product development, the factors that make
up the technology construct in a virtual R&D team are still ambiguous, and managers of virtual R&D teams for new
product development do not know which types of technology should be used. Approach: To address this gap, the
study presents a set of factors that make up a technology construct. The proposed construct was modified by the
findings of a field survey (N = 240). We empirically examine the relationship between the construct and its
factors by employing Structural Equation Modeling (SEM). A measurement model was built based on 19 preliminary
factors extracted from a literature review. The results show that 10 of the 19 factors are retained in the
technology construct. Results: These 10 technology factors can be grouped into two constructs, namely Web-based
communication and Web-based data sharing. The findings can help new product development managers of enterprises
to concentrate on the main factors for leading an effective virtual R&D team; in addition, they provide a
guideline for software developers. Conclusion: Second and third generation technologies are now more suitable
for developing new products through virtual R&D teams.

Key words: Collaboration teams, questionnaires performance, cross-functional teams, product development,
structural equation modeling, measurement model, literature review
INTRODUCTION

Virtual teams are defined as “small temporary groups of geographically, organizationally and/or time dispersed
knowledge workers who coordinate their work, mainly with electronic information and communication technologies,
to carry out one or more organization tasks” (Ebrahim et al., 2009b). A virtual R&D team is a form of virtual
team which includes the features of virtual teams and concentrates on R&D activities (Ebrahim et al., 2011). The
members of a virtual R&D team use different degrees of communication technology to complete research without
space, time and organizational boundaries (Nader et al., 2010a; Husain and Yong, 2009). “We are becoming more
virtual all the time!” is heard in many global corporations today (Chudoba et al., 2005). On the other hand, New
Product Development (NPD) is widely recognized as a key to corporate prosperity (Lam et al., 2007). The
specialized skills and talents needed for developing new products often remain local, in pockets of excellence
around the company or even around the world. Therefore, enterprises have no choice but to disperse their new
product units to access such dispersed knowledge and skills (Kratzer et al., 2005). As a result, enterprises are
finding that internal development of all the technology needed for new products and processes is difficult or
impossible; they must increasingly obtain technology from external sources (Stock and Tatikonda, 2004).

Virtualization in NPD has recently started to make serious headway due to developments in technology: virtuality
in NPD is now technically possible (Leenders et al., 2003). As product development becomes more complex, supply
chains also have to collaborate more closely than in the past. These kinds of collaborations almost always
involve individuals from different locations, so virtual team working supported by Information Technology (IT)
offers notable potential benefits (Anderson et al., 2007). Although the use of the Internet in NPD has received
notable attention in the literature, little is written about collaborative tools and effective virtual teams for
NPD (Ebrahim et al., 2009a). In addition, the literature shows that the factors that make up the technology
construct in a virtual R&D team are still ambiguous. In this study, we try to fill this gap in the literature.
This study is structured as follows. First, based on prior research, we extract 19 factors of the technology
construct in virtual R&D teams. Next, Structural Equation Modeling (SEM) is used as the analytical tool for
estimating and testing the technology construct measurement models. Then the preliminary technology construct
model is adjusted by fitting it according to the SEM fitness indices, producing a final measurement model. The
study ends with a discussion and future guidelines.
We extracted 19 important factors related to the technology construct, based on a comprehensive review of the
technology view in virtual R&D team working. Table 1 summarizes the factors and their supporting references.
E-mails and conference calls are generally known as first generation technologies, while online discussion
boards, PowerPoint presentations, video tools and online meeting tools are second generation technologies. Third
generation technology typically refers to web-enabled shared workspaces on the intranet or Internet (Lee-Kelley
and Sankey, 2008).
Literature review: Virtual teams use digital communications, video and audio links, electronic whiteboards,
e-mail, instant messaging, websites and chat rooms as substitutes for physical collocation of the team members
(Baskerville and Nandhakumar, 2007; Pauleen and Yoong, 2001). Simple transmission of information from point A to
point B is not enough; the virtual environment presents significant challenges to effective communication
(Walvoord et al., 2008). Being equipped with even the most advanced technologies is not enough to make a virtual
team effective, since internal group dynamics and external support mechanisms must also be present for a team to
succeed in the virtual world (Lurey and Raisinghani, 2001). Virtual teams are technology-mediated groups of
people from different disciplines who work on common tasks (Dekker et al., 2008), so the way the technology is
implemented makes favorable virtual team outcomes more or less likely (Anderson et al., 2007). A virtual R&D
team instructor should choose the suitable technology based on the purpose of the team (Ebrahim et al., 2009c).
Research method: To build a measurement model of the technology construct in virtual R&D teams for new product
development, we conducted a web-based survey, mainly in Malaysian and Iranian manufacturing enterprises, on a
random sample of small and medium enterprises. The web-based survey method was selected because it is a
cost-effective and quick way to obtain feedback reflecting the beliefs of the respondents. A Likert scale from
one to five was used. This setup gave respondents a series of attitude dimensions: for each factor, the
respondent was asked to rate, on the Likert scale, whether the factor is not important or extremely important.
The questionnaire was emailed to the managing director, R&D manager, new product development manager, project
and design manager, or other suitable people who were most familiar with the R&D activities in the firm. The
rapid expansion of Internet users has given web-based surveys the potential to become a powerful tool in survey
research (Sills and Song, 2002; Nader et al., 2010b).
Table 1: Summary of the factors related to the technology construct in the virtual teams

Tech1: Use internet and electronic mail (Redoli et al., 2008; Pauleen and Yoong, 2001; Lee-Kelley and Sankey, 2008; Thissen et al., 2007)
Tech2: Online meeting on need basis (Chen et al., 2007; Lee-Kelley and Sankey, 2008; Pena-Mora et al., 2000; Thissen et al., 2007)
Tech3: Web conferencing (Coleman and Levine, 2007; Thissen et al., 2007; Zemliansky and Amant, 2008; Ebrahim et al., 2009c)
Tech4: Seminar on the Web (Zemliansky and Amant, 2008)
Tech5: Shared work spaces (Lee-Kelley and Sankey, 2008)
Tech6: Video conferencing (Chen et al., 2007; Zemliansky and Amant, 2008)
Tech7: Audio conferencing (Chen et al., 2007; Lee-Kelley and Sankey, 2008; Zemliansky and Amant, 2008)
Tech8: Online presentations (Lee-Kelley and Sankey, 2008)
Tech9: Share documents (off-line) (Coleman and Levine, 2007; Ebrahim et al., 2009c)
Tech10: Share what's on your computer desktop with people in other locations (remote access and control) (Thissen et al., 2007; Ale et al., 2009)
Tech11: Do not install engineering software (get service through web browser) (Coleman and Levine, 2007; Kotelnikov, 2007; Vasileva, 2009)
Tech12: Access service from any computer (in network) (Thissen et al., 2007; Vasileva, 2009)
Tech13: Standard phone service and hybrid services (Thissen et al., 2007; Ebrahim et al., 2009c)
Tech14: Access shared files anytime, from any computer (Lee-Kelley and Sankey, 2008)
Tech15: Web database (Coleman and Levine, 2007; Zemliansky and Amant, 2008; Ebrahim et al., 2009c)
Tech16: Provide instant collaboration (Coleman and Levine, 2007; Thissen et al., 2007)
Tech17: Software as a service (canceling the need to install and run the application on one's own computer) (Coleman and Levine, 2007; Thissen et al., 2007)
Tech18: Virtual research center for product development (Zemliansky and Amant, 2008)
Tech19: Can be integrated/compatible with the other tools and systems (Coleman and Levine, 2007; Kotelnikov, 2007)
Invitation e-mails were sent to each respondent, reaching 972 valid email accounts, with reminders following
every two weeks for up to three months. In total, 240 enterprises completed the questionnaire, for an overall
response rate of 24.7% (Table 2).
Convergent validity was established using calculations of the factor loadings, the Average Variance Extracted
(AVE) and the Composite Reliability (CR). Factors with standardized loadings exceeding 0.50 were retained
(Dibrell et al., 2008). The initial measurement model consisted of 19 factors (Tech1 to Tech19). After revising
the measurement model by deleting Tech1, Tech10, Tech11 and Tech13, the AVE and CR were calculated. An AVE
larger than 0.5 is the accepted threshold (McNamara et al., 2008). CR is calculated by squaring the sum of the
loadings and dividing it by the sum of that squared value and the sum of the measurement error variances (Lin et
al., 2008); CR should be greater than 0.6 (Huang, 2009). The measurement model had acceptable convergent
validity, since the calculated CR and AVE were 0.930 and 0.613, respectively.
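The AVE and CR arithmetic described above can be sketched as follows; the standardized loadings here are hypothetical placeholders, not the estimates from this study:

```python
# Sketch of the AVE and CR computations described above.
# The standardized loadings below are hypothetical placeholders.
loadings = [0.78, 0.81, 0.74, 0.69, 0.83]

# Under the standard formulas, each item's error variance is 1 - loading^2.
errors = [1 - l**2 for l in loadings]

ave = sum(l**2 for l in loadings) / len(loadings)         # Average Variance Extracted
cr = sum(loadings)**2 / (sum(loadings)**2 + sum(errors))  # Composite Reliability

print(f"AVE = {ave:.3f} (threshold 0.5), CR = {cr:.3f} (threshold 0.6)")
```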
For discriminant validity, we ran the AMOS software using the Maximum Likelihood (ML) method. The fit indices
were checked against their respective acceptance values (Table 4). We ran AMOS for model Ver1 (the technology
construct with 15 factors) and found a chi-square per degrees of freedom outside the acceptable range (CMIN/DF =
7.232); most of the other fit indices were also not in the acceptable range.
RESULTS

Anderson and Gerbing (1988) suggested using Confirmatory Factor Analysis (CFA) for scale development because it
affords a stricter interpretation of unidimensionality than more traditional approaches such as coefficient
alpha, item-total correlations and exploratory factor analysis. The evidence that the measures were
one-dimensional, i.e., that a set of indicators (factors) shares only a single underlying construct, was
assessed using CFA (Anderson and Gerbing, 1988). After data collection, measure purification procedures should
be used to assess reliability, unidimensionality, discriminant validity and convergent validity (Anderson and
Gerbing, 1988).
For the reliability analysis, Cronbach's alpha (Cronbach, 1951) was computed for each factor. As shown in Table
3, all the items with Cronbach's α greater than the 0.6 threshold were included in the analysis and the rest
were omitted; accordingly, the factors Tech1, Tech10, Tech11 and Tech13 were removed from further analysis. In
general, the questionnaire's instruments displayed good reliability across samples.
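For illustration, Cronbach's alpha can be computed from raw item responses as follows; the response matrix here is a hypothetical placeholder, not the survey data:

```python
# Sketch of the Cronbach's alpha computation used for the reliability analysis.
# 'items' holds hypothetical Likert responses (rows = respondents, cols = items).
import numpy as np

items = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")     # retained if above the 0.6 threshold
```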
Structural Equation Modeling (SEM) using AMOS 18 was employed to validate the measurement model. In this
statistical analysis, the measurement and structural models are estimated simultaneously (Dibrell et al., 2008).
To ensure the factors form the right construct, the measurement model was examined for model fit and then
assessed for convergent and discriminant validity.
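The study used AMOS; for readers without access to it, a broadly comparable confirmatory factor analysis can be sketched with the open-source semopy package. The model syntax, item selection and data file below are assumptions for illustration, not the study's actual files:

```python
# Sketch of a one-factor CFA comparable to the AMOS measurement model,
# using the open-source semopy package (not the software used in the study).
import pandas as pd
import semopy

# Hypothetical model syntax: one technology construct measured by the
# retained Tech items; the item list is an assumption for illustration.
desc = """
technology =~ Tech2 + Tech3 + Tech4 + Tech5 + Tech6 + Tech7 + Tech8 + Tech9 + Tech12
"""

# Hypothetical CSV of Likert responses, one column per Tech item.
df = pd.read_csv("survey_responses.csv")

model = semopy.Model(desc)
model.fit(df)                    # maximum likelihood estimation by default
print(model.inspect())           # factor loadings and estimates
print(semopy.calc_stats(model))  # chi-square, CFI, RMSEA and other fit indices
```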
Table 2: Summarized online survey data collection
Number of emails sent to enterprises: 3625
Total responses (clicked the online web page): 972
Total responses / emails sent (%): 26.8
Total completed questionnaires: 240
Total completed / total responses (%): 24.7
Table 3: Summary of the final measures and reliabilities
Factor name    Corrected item-total correlation    Cronbach's alpha if item deleted
Tech1          0.525                               0.943
Tech2          0.755                               0.939
Tech3          0.777                               0.939
Tech4          0.717                               0.940
Tech5          0.759                               0.939
Tech6          0.722                               0.940
Tech7          0.731                               0.939
Tech8          0.780                               0.939
Tech9          0.610                               0.942
Tech10         0.576                               0.942
Tech11         0.571                               0.943
Tech12         0.686                               0.940
Tech13         0.519                               0.943
Tech14         0.624                               0.941
Tech15         0.696                               0.940
Tech16         0.642                               0.941
Tech17         0.678                               0.940
Tech18         0.649                               0.941
Tech19         0.615                               0.942
Table 4: Fitting indices (adapted from Byrne, 2001)
χ2/degrees of freedom (CMIN/DF): ≤ 2.00
IFI (Incremental Fit Index): ≥ 0.90
CFI (Comparative Fit Index): values range from zero to 1.00, with values close to 0.95 showing superior fit
RMSEA (Root Mean Square Error of Approximation): values less than 0.05 show good fit, values as high as 0.08 represent reasonable fit, values from 0.08 to 0.10 show mediocre fit and those greater than 0.10 show poor fit
RMR (Root mean square residual): ≤ 0.08
GFI (Goodness-of-Fit Index): ≥ 0.90
NFI (Normed Fit Index): values range from zero to 1.00, with values close to 0.95 showing superior fit
RFI (Relative Fit Index): values range from zero to 1.00, with values close to 0.95 showing superior fit
TLI (Tucker-Lewis Index): values range from zero to 1.00, with values close to 0.95 (for large samples) indicating good fit
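The chi-square-based entries in Table 4 can be computed directly from a fitted model's chi-square statistic. A minimal sketch using the standard formulas; the chi-square, degrees of freedom and sample size below are illustrative placeholders, not values from this study:

```python
# Sketch: CMIN/DF and RMSEA from a model's chi-square, using standard formulas.
import math

def cmin_df(chi2: float, df: int) -> float:
    """Chi-square per degree of freedom (CMIN/DF)."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation for sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative placeholder values (n = 240 matches the survey sample size).
print(cmin_df(chi2=260.0, df=90))       # compare against the <= 2.00 desired range
print(rmsea(chi2=260.0, df=90, n=240))  # < .05 good, up to .08 reasonable
```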
Thus, referring to the AMOS Modification Indices (MI), some of the factors that had the lowest factor loadings,
or the same effect as a remaining factor, were deleted. With this modification, measurement model Ver2 had a
much improved chi-square per degrees of freedom (CMIN/DF = 4.767), and the other fit indices, RMSEA, RMR and
GFI, were in the acceptable range. Therefore, the best fitting model was measurement model Ver2 (Fig. 1), and it
was used for further analysis.

Fig. 1: Measurement model Ver2

DISCUSSION

The final measurement model was developed based on measurement model Ver2 by classifying the factors into two
groups according to their relevant factor loadings, with a threshold of 0.83. Appropriate names for the two
groups are Web-based communication and Web-based data sharing, respectively. As displayed in Fig. 2, each factor
loading was above 0.62 and significant. Overall, the final measurement model produced good fit indices (CMIN/DF
= 2.889, RMR = 0.04, GFI = 0.929, RFI = 0.929, NFI = 0.949, TLI = 0.952, CFI = 0.966, IFI = 0.964, RMSEA =
0.089).

Fig. 2: Final measurement model

While fitting the technology construct measurement model, the factors Tech14 (access shared files anytime, from
any computer), Tech15 (web database), Tech16 (provide instant collaboration), Tech17 (software as a service,
eliminating the need to install and run the application on one's own computer) and Tech19 (can be integrated
with, or made compatible with, other tools and systems) were dropped. Modification indices based on regression
weights show that Tech17, Tech18 and Tech19 are highly correlated, so one representative (Tech18) from this
group is enough. Tech14 to Tech16 are strongly correlated with Tech12, so the remaining factor represents the
deleted ones.

The results of the final measurement model of the technology construct in virtual R&D teams for developing a new
product show two main constructs, which are strongly correlated with each other:

• Web-based communication consists of online meetings on a needed basis, web conferencing, seminars on the web,
video conferencing, audio conferencing and online presentations
• Web-based data sharing consists of shared work spaces, shared documents (off-line), access to services from
any computer (in network) and a virtual research center for product development

According to Lee-Kelley and Sankey (2008), these two constructs belong to the second and third generations of
technology. Equipping virtual R&D team members with suitable technology makes the teams more effective.
Therefore, NPD managers should provide the facilities and infrastructure for virtual R&D teams to achieve a
higher level of team effectiveness.
CONCLUSION

Research so far has explored 19 factors for working together virtually; however, little is still known about
which factors contribute most to the technology construct of virtual R&D teams for new product development. The
findings of this study extend the literature and help to build a foundation for further understanding of the
technology elements in virtual R&D teams for new product development. The measurement model shows ten factors
that make up the technology constructs. These ten factors can be sorted by their factor loadings, which reflect
the factor weights. Therefore, software developers or NPD managers are able to provide a better platform for
virtual team working by concentrating on the main factors. The second and third generations of technology
(referring to the definition of Lee-Kelley and Sankey (2008)) are now more suitable for developing a new product
through virtual R&D teams.

Future research is needed to examine the effect of each factor on the performance of virtual R&D teams while the
other constructs of virtual teams, such as process and people, are present. A new SEM is needed to demonstrate
the factor-construct and construct-construct relationships, which have not yet been investigated in the
literature.
REFERENCES

Ale, E., N. Ahmed and Z. Taha, 2009. Innovation and Technology Facilitator. University of Tehran.
Anderson, A.H., R. Mcewan, J. Bal and J. Carletta, 2007. Virtual team meetings: An analysis of communication and
context. Comput. Human Behav., 23: 2558-2580. DOI: 10.1016/j.chb.2007.01.001
Anderson, J.C. and D.W. Gerbing, 1988. Structural equation modeling in practice: A review and recommended
two-step approach. Psychol. Bull., 103: 411-423. DOI: 10.1037/0033-2909.103.3.411
Baskerville, R. and J. Nandhakumar, 2007. Activating and perpetuating virtual teams: Now that we're mobile,
where do we go? IEEE Trans. Profess. Commun., 50: 17-34. DOI: 10.1109/TPC.2006.890849
Byrne, B.M., 2001. Structural Equation Modeling with AMOS: Basic Concepts, Applications and Programming. 1st
Edn., Routledge, USA. ISBN-10: 0805841040, pp: 352.
Chen, M., Y. Liou, C.W. Wang, Y.W. Fan and Y.P.J. Chi, 2007. TeamSpirit: Design, implementation and evaluation
of a Web-based group decision support system. Decision Support Syst., 43: 1186-1202. DOI: 10.1016/j.dss.2005.07.008
Chudoba, K.M., E. Wynn, M. Lu and B.M. Watson-Manheim, 2005. How virtual are we? Measuring virtuality and
understanding its impact in a global organization. Inform. Syst. J., 15: 279-306. DOI: 10.1111/j.1365-2575.2005.00200.x
Coleman, D. and S. Levine, 2007. Collaboration 2.0: Technology and Best Practices for Successful Collaboration
in a Web 2.0 World. 1st Edn., Happy About, California, USA. ISBN-10: 1600050719, pp: 320.
Cronbach, L.J., 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 16: 297-334. DOI:
10.1007/BF02310555
Dekker, D.M., C.G. Rutte and V.D.P.T. Berg, 2008. Cultural differences in the perception of critical interaction
behaviors in global virtual teams. Int. J. Intercult. Relations, 32: 441-452. DOI: 10.1016/j.ijintrel.2008.06.003
Dibrell, C., P.S. Davis and J. Craig, 2008. Fueling innovation through information technology in SMEs. J. Small
Bus. Manage., 46: 203-218. DOI: 10.1111/j.1540-627X.2008.00240.x
Ebrahim, N.A., S. Ahmed and Z. Taha, 2009a. Modified stage-gate: A conceptual model of virtual product
development process. Afr. J. Marke. Manag., 1: 211-219.
Ebrahim, N.A., S. Ahmed and Z. Taha, 2009b. Virtual R&D teams in small and medium enterprises: A literature
review. Sci. Res. Essay, 4: 1575-1590. DOI: 10.2139/ssrn.1530904
Ebrahim, N.A., S. Ahmed and Z. Taha, 2009c. Virtual teams: A literature review. Australian J. Basic Applied
Sci., 3: 2653-2669.
Ebrahim, N.A., S.H.A. Rashid, S. Ahmed and Z. Taha, 2011. The effectiveness of virtual R&D teams in SMEs:
Experiences of Malaysian SMEs. Indu. Eng. Manag. Syst., 10: 109-114.
Huang, C.C., 2009. Knowledge sharing and group cohesiveness on performance: An empirical study of technology
R&D teams in Taiwan. Technovation, 29: 786-797. DOI: 10.1016/j.technovation.2009.04.003
Husain, A.J.A. and C.H. Yong, 2009. Utility-based policy management system for virtual organization. J. Comput.
Sci., 5: 635-645. DOI: 10.3844/jcssp.2009.635.645
Kotelnikov, V., 2007. Asia-Pacific Development Information Programme e-Primers for the Information Economy,
Society and Polity. 1st Edn., ISBN: 9789748283937, pp: 27.
Kratzer, J., R.T.A.J. Leenders and J.M.L.V. Engelen, 2005. Keeping virtual R&D teams creative. CBS Interactive.
Lam, P.K., K.S. Chin, J.B. Yang and W. Liang, 2007. Self-assessment of conflict management in client-supplier
collaborative new product development. Indus. Manag. Data Syst., 107: 688-714. DOI: 10.1108/02635570710750435
Lee-Kelley, L. and T. Sankey, 2008. Global virtual teams for value creation and project success: A case study.
Int. J. Project Manag., 26: 51-62. DOI: 10.1016/j.ijproman.2007.08.010
Leenders, R.T.A.J., J.M.L.V. Engelen and J. Kratzer, 2003. Virtuality, communication and new product team
creativity: A social network perspective. J. Eng. Technol. Manag., 20: 69-92. DOI: 10.1016/S0923-4748(03)00005-5
Lin, C., C. Standing and Y.C. Liu, 2008. A model to develop effective virtual teams. Decision Support Syst., 45:
1031-1045. DOI: 10.1016/j.dss.2008.04.002
Lurey, J.S. and M.S. Raisinghani, 2001. An empirical study of best practices in virtual teams. Inform. Manage.,
38: 523-544. DOI: 10.1016/S0378-7206(01)00074-X
McNamara, K., A.R. Dennis and T.A. Carte, 2008. It's the thought that counts: The mediating effects of
information processing in virtual team decision making. Inform. Syst. Manage., 25: 20-32. DOI: 10.1080/10580530701777123
Nader, A.E., A. Shamsuddin, A. Rashid, S. Hanim and T. Zahari, 2010a. The effectiveness of virtual R&D teams in
SMEs: Experiences of Malaysian SMEs. Proceedings of the 11th Asia Pacific Industrial Engineering and Management
Systems Conference, Dec. 9-9, Melaka, Malaysia. University of Malaya Press, Kuala Lumpur, Malaysia, pp: 1-6.
Nader, A.E., A. Shamsuddin and T. Zahari, 2010b. Virtual R&D teams and SMEs growth: A comparative study between
Iranian and Malaysian SMEs. Afr. J. Bus. Manage., 4: 2368-2379.
Pauleen, D.J. and P. Yoong, 2001. Facilitating virtual team relationships via Internet and conventional
communication channels. Int. Res., 11: 190-202. DOI: 10.1108/10662240110396450
Pena-Mora, F., K. Hussein, S. Vadhavkar and K. Benjamin, 2000. CAIRO: A concurrent engineering meeting
environment for virtual design teams. Artif. Intell. Eng., 14: 203-219. DOI: 10.1016/S0954-1810(00)00016-9
Redoli, J., R. Mompo, J. Garcia-Diez and M. Lopez-Coronado, 2008. A model for the assessment and development of
Internet-based information and communication services in small and medium enterprises. Technovation, 28:
424-435. DOI: 10.1016/j.technovation.2008.02.008
Sills, S.J. and C. Song, 2002. Innovations in survey research: An application of web-based surveys. Soc. Sci.
Comp. Rev., 20: 22-30. DOI: 10.1177/089443930202000103
Stock, G.N. and M.V. Tatikonda, 2004. External technology integration in product and process development. Int.
J. Oper. Production Manage., 24: 642-665. DOI: 10.1108/01443570410541975
Thissen, M.R., J.M. Page, M.C. Bharathi and T.L. Austin, 2007. Communication tools for distributed software
development teams. Proceedings of the 2007 ACM SIGMIS CPR Conference on Computer Personnel Research: The Global
Information Technology Workforce, New York. DOI: 10.1145/1235000.1235007
Vasileva, S.E., 2009. Authority-based and bottom-up diffusion of collaboration information technologies:
Constraints and enablements. MBA (Management Information Systems) thesis, Universität Koblenz-Landau.
Walvoord, A.A.G., E.R. Redden, L.R. Elliott and M.D. Coovert, 2008. Empowering followers in virtual teams:
Guiding principles from theory and practice. Comput. Human Behav., 24: 1884-1906. DOI: 10.1016/j.chb.2008.02.006
Zemliansky, P. and K.S. Amant, 2008. Handbook of Research on Virtual Workplaces and the New Nature of Business
Practices. 1st Edn., IGI Global, Hershey, PA. ISBN-10: 159904871X, pp: 733.