
Newsletters, bulletins, blogs, briefs and brochures

Short communication formats—such as bulletins, briefs, newsletters, blogs and brochures—can be used to highlight particular findings or angles on the evaluation.

Shorter communication formats can complement longer formats such as evaluation reports, help generate interest in learning more about the full evaluation findings, and attract the attention of a wider audience. They can be used to invite feedback (particularly if they are available online), provide updates, report upcoming evaluation events, or present preliminary or final findings.

These short communications differ in the following ways:

  • Bulletin or Brief : A very short, “brief” format – either a frequently circulated update on project progress or a short presentation of evaluation results. It can also be used to present decisions that have been made, for example in a “policy brief” or a bulletin summarising “lessons learned”.
  • Newsletter : Longer than the other types, a newsletter generally follows a newspaper format, covering a regular theme and appearing at regular intervals. It can include articles about evaluation findings as well as more analytical themes.
  • Blog : The “weblog” is a more informal discussion piece that can probe more deeply into a particular question or finding from the evaluation, for example as part of a series of “Stories from the field” (see more on blogs under Website communications).
  • Brochure : A simple folded leaflet or pamphlet used to attract attention to your organization, usually for PR purposes. You can, for example, highlight a particular positive impact statement from an evaluation in a brochure.

Newsletter front page with title section, three content sections and an image of people using a device to collect data

Source: ("Zambia National Malaria Control Centre," 2009)

Advice for choosing this method

  • These short communications are attractive and useful when attending an event, workshop or conference, where they provide a snapshot of information about evaluation results and your organization.
  • A newsletter often has a list of interested subscribers, making it a good choice when you need to inform that particular audience about ongoing developments.
  • Which format you choose depends on the resources available and on whether your audience has internet access.

Advice for using this method

Whichever option you choose, the communication should be visually attractive and easy to read, using different colours, layouts, photographs, and varied headings and graphics.

This blog from Oxfam Australia provides a clear example of the use and construction of a blog.

This brochure from Save the Children USA is a good example of a brochure.

This website allows you to download a range of newsletter templates for both print and web.

This tool allows the user to design email newsletters and then distribute and integrate them with their current social media and communication networks. 

  • Stetson, V. (2008). Communicating and reporting on an evaluation: Guidelines and tools. Catholic Relief Services and American Red Cross. (archived link)
  • Zambia National Malaria Control Centre. (2009). Zambia National Malaria Control Centre monitoring and evaluation newsletter. (archived link)

'Newsletters, bulletins, blogs, briefs and brochures' is referenced in:


  • Rainbow Framework :  Develop reporting media
  • Website communications


© 2022 BetterEvaluation. All rights reserved.

13.5 Research Process: Making Notes, Synthesizing Information, and Keeping a Research Log

Learning Outcomes

By the end of this section, you will be able to:

  • Employ the methods and technologies commonly used for research and communication within various fields.
  • Practice and apply strategies such as interpretation, synthesis, response, and critique to compose texts that integrate the writer’s ideas with those from appropriate sources.
  • Analyze and make informed decisions about intellectual property based on the concepts that motivate them.
  • Apply citation conventions systematically.

As you conduct research, you will work with a range of “texts” in various forms, including sources and documents from online databases as well as images, audio, and video files from the Internet. You may also work with archival materials and with transcribed and analyzed primary data. Additionally, you will be taking notes and recording quotations from secondary sources as you find materials that shape your understanding of your topic and, at the same time, provide you with facts and perspectives. You also may download articles as PDFs that you then annotate. Like many other students, you may find it challenging to keep so much material organized, accessible, and easy to work with while you write a major research paper. As it does for many of those students, a research log for your ideas and sources will help you keep track of the scope, purpose, and possibilities of any research project.

A research log is essentially a journal in which you collect information, ask questions, and monitor the results. Even if you are completing the annotated bibliography for Writing Process: Informing and Analyzing, keeping a research log is an effective organizational tool. Like Lily Tran’s research log entry, most entries have three parts: a part for notes on secondary sources, a part for connections to the thesis or main points, and a part for your own notes or questions. Record source notes by date, and allow room to add cross-references to other entries.

Summary of Assignment: Research Log

Your assignment is to create a research log similar to the student model. You will use it for the argumentative research project assigned in Writing Process: Integrating Research to record all secondary source information: your notes, complete publication data, relation to thesis, and other information as indicated in the right-hand column of the sample entry.

Another Lens. A somewhat different approach to maintaining a research log is to customize it to your needs or preferences. You can apply shading or color coding to headers, rows, and/or columns in the three-column format. Or you can add columns to accommodate more information, analysis, synthesis, or commentary, formatting them as you wish. Consider adding a column for questions only or one for connections to other sources. Finally, consider a different visual format, such as one without columns. Another possibility is to record some of your comments and questions as audio so that you have an aural rather than a written record of them.

Writing Center

At this point, or at any other point during the research and writing process, you may find that your school’s writing center can provide extensive assistance. If you are unfamiliar with the writing center, now is a good time to pay your first visit. Writing centers provide free peer tutoring for all types and phases of writing. Discussing your research with a trained writing center tutor can help you clarify, analyze, and connect ideas as well as provide feedback on works in progress.

Quick Launch: Beginning Questions

You may begin your research log with some open pages in which you freewrite, exploring answers to the following questions. Although you generally would do this at the beginning, it is a process to which you likely will return as you find more information about your topic and as your focus changes, as it may during the course of your research.

  • What information have I found so far?
  • What do I still need to find?
  • Where am I most likely to find it?

These are beginning questions. Like Lily Tran, however, you will come across general questions or issues that a quick note or freewrite may help you resolve. The key to this section is to revisit it regularly. Written answers to these and other self-generated questions in your log clarify your tasks as you go along, helping you articulate ideas and examine supporting evidence critically. As you move further into the process, consider answering the following questions in your freewrite:

  • What evidence looks as though it best supports my thesis?
  • What evidence challenges my working thesis?
  • How is my thesis changing from where it started?

Creating the Research Log

As you gather source material for your argumentative research paper, keep in mind that the research is intended to support original thinking. That is, you are not writing an informational report in which you simply supply facts to readers. Instead, you are writing to support a thesis that shows original thinking, and you are collecting and incorporating research into your paper to support that thinking. Therefore, a research log, whether digital or handwritten, is a great way to keep track of your thinking as well as your notes and bibliographic information.

In the model below, Lily Tran records the correct MLA bibliographic citation for the source. Then, she records a note and includes the in-text citation to avoid having to retrieve this information later. Perhaps most important, Tran records why she noted this information—how it supports her thesis: The human race must turn to sustainable food systems that provide healthy diets with minimal environmental impact, starting now. Finally, she makes a note to herself about an additional visual to include in the final paper to reinforce the point regarding the current pressure on food systems. And she connects the information to other information she finds, thus cross-referencing and establishing a possible synthesis. Use a format similar to that in Table 13.4 to begin your own research log.

Types of Research Notes

Taking good notes will make the research process easier by enabling you to locate and remember sources and use them effectively. While some research projects requiring only a few sources may seem easily tracked, research projects requiring more than a few sources are more effectively managed when you take good bibliographic and informational notes. As you gather evidence for your argumentative research paper, follow the descriptions and the electronic model to record your notes. You can combine these with your research log, or you can use the research log for secondary sources and your own note-taking system for primary sources if a division of this kind is helpful. Either way, be sure to include all necessary information.

Bibliographic Notes

These identify the source you are using. When you locate a useful source, record the information necessary to find that source again. It is important to do this as you find each source, even before taking notes from it. If you create bibliographic notes as you go along, then you can easily arrange them in alphabetical order later to prepare the reference list required at the end of formal academic papers. If your instructor requires you to use MLA formatting for your essay, be sure to record the following information:

  • Author
  • Title of source
  • Title of container (larger work in which source is included)
  • Other contributors
  • Version
  • Number
  • Publisher
  • Publication date
  • Location (page numbers, URL, etc.)

When using MLA style with online sources, also record the following information:

  • Date of original publication
  • Date of access
  • DOI (A DOI, or digital object identifier, is a series of digits and letters that leads to the location of an online source. Articles in journals are often assigned DOIs to ensure that the source can be located, even if the URL changes. If your source is listed with a DOI, use that instead of a URL.)

It is important to understand which documentation style your instructor will require you to use. Check the Handbook for MLA Documentation and Format and APA Documentation and Format. In addition, you can check the style guide information provided by the Purdue Online Writing Lab.

Informational Notes

These notes record the relevant information found in your sources. When writing your essay, you will work from these notes, so be sure they contain all the information you need from every source you intend to use. Also try to focus your notes on your research question so that their relevance is clear when you read them later. To avoid confusion, work with separate entries for each piece of information recorded. At the top of each entry, identify the source through brief bibliographic identification (author and title), and note the page numbers on which the information appears. Also helpful is to add personal notes, including ideas for possible use of the information or cross-references to other information. As noted in Writing Process: Integrating Research , you will be using a variety of formats when borrowing from sources. Below is a quick review of these formats in terms of note-taking processes. By clarifying whether you are quoting directly, paraphrasing, or summarizing during these stages, you can record information accurately and thus take steps to avoid plagiarism.

Direct Quotations, Paraphrases, and Summaries

A direct quotation is an exact duplication of the author’s words as they appear in the original source. In your notes, put quotation marks around direct quotations so that you remember these words are the author’s, not yours. One advantage of copying exact quotations is that it allows you to decide later whether to include a quotation, paraphrase, or summary. In general, though, use direct quotations only when the author’s words are particularly lively or persuasive.

A paraphrase is a restatement of the author’s words in your own words. Paraphrase to simplify or clarify the original author’s point. In your notes, use paraphrases when you need to record details but not exact words.

A summary is a brief condensation or distillation of the main point and most important details of the original source. Write a summary in your own words, with facts and ideas accurately represented. A summary is useful when specific details in the source are unimportant or irrelevant to your research question. You may find you can summarize several paragraphs or even an entire article or chapter in just a few sentences without losing useful information. It is a good idea to note when your entry contains a summary to remind you later that it omits detailed information. See Writing Process: Integrating Research for more detailed information and examples of quotations, paraphrases, and summaries and when to use them.

Other Systems for Organizing Research Logs and Digital Note-Taking

Students often become frustrated and at times overwhelmed by the quantity of materials to be managed in the research process. If this is your first time working with both primary and secondary sources, finding ways to keep all of the information in one place and well organized is essential.

Because gathering primary evidence may be a relatively new practice, this section is designed to help you navigate the process. As mentioned earlier, information gathered in fieldwork is not cataloged, organized, indexed, or shelved for your convenience. Obtaining it requires diligence, energy, and planning. Online resources can assist you with keeping a research log. Your college library may have subscriptions to tools such as Todoist or EndNote. Consult with a librarian to find out whether you have access to any of these. If not, use the template shown in Figure 13.8, or another like it, to create your own research notes and organizational tool. You will need to keep a record of all field research data as well as the research log for all secondary sources.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Writing a progress/status report

By Michael Ernst, January 2010.

Writing a weekly report about your research progress can make your research more successful, less frustrating, and more visible to others, among other benefits.

One good format is to write your report in four parts:

  • Quote the previous week's plan. This helps you determine whether you accomplished your goals.
  • State this week's progress. This can include information such as: what you have accomplished, what you learned, what difficulties you overcame, what difficulties are still blocking you, your new ideas for research directions or projects, and the like.
  • Give the next week's plan. A good format is a bulleted list, so we can see what you accomplished or did not. Try to make each goal measurable: there should be no ambiguity as to whether you were able to finish it. It's good to include longer-term goals as well.
  • Give an agenda for the meeting. Some people like to send this as a separate message, which is fine.
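Put together, a report in this four-part format might look like the following sketch (the dates, tasks, and agenda items are purely illustrative):

```
Subject: Weekly report, week of 2010-01-11

Last week's plan:
> - Run the benchmark suite on the new allocator
> - Draft the related-work section

This week's progress:
- Benchmarks run; 2 of 14 regress, cause unknown (blocking)
- Related-work draft done; learned how our approach differs from prior work

Next week's plan:
- Isolate the two regressing benchmarks to a minimal test case
- Revise the related-work draft based on feedback

Agenda for the meeting:
- Discuss strategy for isolating the regressions
- Decide on a target venue for submission
```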

The report need not be onerous. It can be a few paragraphs or a page, so it shouldn't take you long to write. Minimize details that are not relevant to your audience, such as classwork and the like, in order to keep the report focused; you will spend less time writing it, and make it more likely to be read.

Writing the progress report has many benefits.

Writing the report will make you more productive, because it will force you to think about your work in a manner concretely enough to write down. Any time that you spend organizing your thoughts will more than pay itself back in better understanding and improved productivity. When a project is complete, it is all too easy to forget some of your contributions. You can look back over your progress reports to remember what was difficult, and to think about how to work more productively in the future. You may be able to re-use some of the text when writing up your results.

Writing the report will make your meetings more productive. When you have a weekly research meeting, the report should be sent 24 hours in advance, to help everyone prepare. (Two hours is not an acceptable alternative: it does not let everyone — both you and others — mull over the ideas.) Don't delay your report because you want to wait until you have better results to report. Instead, send the report on schedule, and if you get more results in the next 24 hours, you can discuss those at the meeting.

Writing the report will give you feedback from a new point of view. The report enables others outside your research project to know what you are doing. Those people may respond with ideas or suggestions, which can help get you unstuck or give you additional avenues to explore. It also keeps you on their radar screen and reminds them of your work, which is a good thing if you don't meet with them frequently. (For PhD students, a periodic report to the members of your thesis committee can pay big dividends.)

Writing the report helps explain (to yourself especially, but also to others) how you spent your time — even if there isn't as much progress as you would have preferred, you can see that you did work hard, and how to be more efficient or effective in the future.

If your meetings are more frequent than weekly, then the progress report should also be more frequent. If your meetings are less frequent, it's a good idea to still send a progress report each week.

Important tip: Throughout the day, maintain a log of what you have done. This can be a simple text file. You can update it when you start and end a task, or at regular intervals throughout the day. It takes only a moment to maintain the log, and it makes writing the report easy. By contrast, without a log you might forget what you have done during the week, and writing the report could take a long time.
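For instance, on a Unix-like system the log can be as simple as a shell function that timestamps whatever you type (the file name worklog.txt and the entries are illustrative):

```shell
# Append a timestamped entry to a plain-text work log.
# The log file name is illustrative; use whatever suits you.
log() {
  printf '%s  %s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> worklog.txt
}

log "Read two papers on type inference; notes added to research log"
log "Fixed the build breakage; started drafting next week's plan"
```

By the end of the week, the file itself is a near-complete first draft of the progress section of the report.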


Bulletin of the National Research Centre

Preparing your manuscript

The title page should:

  • present a title that includes, if appropriate, the study design
  • if a collaboration group should be listed as an author, please list the Group name as an author. If you would like the names of the individual members of the Group to be searchable through their individual PubMed records, please include this information in the “Acknowledgements” section in accordance with the instructions below
  • Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript
  • indicate the corresponding author

Abstract

The abstract should not exceed 350 words. Please minimize the use of abbreviations and do not cite references in the abstract. The abstract must include the following separate sections:

  • Background : the context and purpose of the study
  • Results : the main findings
  • Conclusions : a brief summary and potential implications

Keywords

Three to ten keywords representing the main content of the article.

Background

The Background section should explain the background to the study, its aims, a summary of the existing literature and why this study was necessary.

Results

This section should include the findings of the study including, if appropriate, results of statistical analysis, which must be presented either in the text or as tables and figures.

Discussion

For research articles this section should discuss the implications of the findings in the context of existing research and highlight limitations of the study. For study protocols and methodology manuscripts this section should include a discussion of any practical or operational issues involved in performing the study and any issues not covered in other sections.


Conclusions

This section should state clearly the main conclusions and provide an explanation of the importance and relevance of the study to the field.

Methods (can also be placed after Background)

The methods section should include:

  • the aim, design and setting of the study
  • the characteristics of participants or description of materials
  • a clear description of all processes, interventions and comparisons. Generic names should generally be used. When proprietary brands are used in research, include the brand names in parentheses
  • the type of statistical analysis used, including a power calculation if appropriate

List of abbreviations

If abbreviations are used in the text they should be defined in the text at first use, and a list of abbreviations should be provided.


All manuscripts must contain the following sections under the heading 'Declarations':

  • Ethics approval and consent to participate
  • Consent for publication
  • Availability of data and materials
  • Competing interests
  • Funding
  • Authors' contributions
  • Acknowledgements
  • Authors' information (optional)

Please see below for details on the information to be included in these sections.

If any of the sections are not relevant to your manuscript, please include the heading and write 'Not applicable' for that section.

Ethics approval and consent to participate

Manuscripts reporting studies involving human participants, human data or human tissue must:

  • include a statement on ethics approval and consent (even where the need for approval was waived)
  • include the name of the ethics committee that approved the study and the committee’s reference number if appropriate

Studies involving animals must include a statement on ethics approval.

See our  editorial policies  for more information.

If your manuscript does not report on or involve the use of any animal or human data or tissue, please state “Not applicable” in this section.

Consent for publication

If your manuscript contains any individual person’s data in any form (including individual details, images or videos), consent to publish must be obtained from that person, or in the case of children, their parent or legal guardian. All presentations of case reports must have consent to publish.

You can use your institutional consent form if you prefer. You should not send the form to us on submission, but we may request to see a copy at any stage (including after publication).

See our  editorial policies  for more information on consent for publication.

If your manuscript does not contain data from any individual person, please state “Not applicable” in this section.

Availability of data and materials

All manuscripts must include an ‘Availability of data and materials’ statement. Data availability statements should include information on where data supporting the results reported in the article can be found including, where applicable, hyperlinks to publicly archived datasets analysed or generated during the study. By data we mean the minimal dataset that would be necessary to interpret, replicate and build upon the findings reported in the article. We recognise it is not always possible to share research data publicly, for instance when individual privacy could be compromised, and in such instances data availability should still be stated in the manuscript along with any conditions for access.

Data availability statements can take one of the following forms (or a combination of more than one if required for multiple datasets):

  • The datasets generated and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS]
  • The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
  • All data generated or analysed during this study are included in this published article [and its supplementary information files].
  • The datasets generated and/or analysed during the current study are not publicly available due [REASON WHY DATA ARE NOT PUBLIC] but are available from the corresponding author on reasonable request.
  • Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
  • The data that support the findings of this study are available from [third party name] but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of [third party name].
  • Not applicable. If your manuscript does not contain any data, please state 'Not applicable' in this section.

More examples of template data availability statements, which include examples of openly available and restricted access datasets, are available  here .

SpringerOpen  also requires that authors cite any publicly available data on which the conclusions of the paper rely in the manuscript. Data citations should include a persistent identifier (such as a DOI) and should ideally be included in the reference list. Citations of datasets, when they appear in the reference list, should include the minimum information recommended by DataCite and follow journal style. Dataset identifiers including DOIs should be expressed as full URLs. For example:

Hao Z, AghaKouchak A, Nakhjiri N, Farahmand A. Global integrated drought monitoring and prediction system (GIDMaPS) data sets. figshare. 2014.

With the corresponding text in the Availability of data and materials statement:

The datasets generated during and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS]. [Reference number]
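As a side note, the requirement above that dataset DOIs be expressed as full URLs is easy to automate; the following Python sketch (the function name is illustrative) normalises a bare or prefixed DOI into the full https://doi.org form:

```python
def doi_to_url(doi: str) -> str:
    """Return the DOI expressed as a full https://doi.org URL."""
    doi = doi.strip()
    # Strip any resolver prefix or "doi:" label already present.
    for prefix in ("https://doi.org/", "http://doi.org/",
                   "https://dx.doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    return "https://doi.org/" + doi

print(doi_to_url("doi:10.1007/s001090000086"))
# → https://doi.org/10.1007/s001090000086
```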

Competing interests

All financial and non-financial competing interests must be declared in this section.

See our  editorial policies  for a full explanation of competing interests. If you are unsure whether you or any of your co-authors have a competing interest please contact the editorial office.

Please use the authors’ initials to refer to each author’s competing interests in this section.

If you do not have any competing interests, please state "The authors declare that they have no competing interests" in this section.

Funding

All sources of funding for the research reported should be declared. The role of the funding body in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript should be declared.

Authors' contributions

The individual contributions of authors to the manuscript should be specified in this section. Guidance and criteria for authorship can be found in our editorial policies.

Please use initials to refer to each author's contribution in this section, for example: "FC analyzed and interpreted the patient data regarding the hematological disease and the transplant. RH performed the histological examination of the kidney, and was a major contributor in writing the manuscript. All authors read and approved the final manuscript."

Acknowledgements

Please acknowledge anyone who contributed towards the article who does not meet the criteria for authorship, including anyone who provided professional writing services or materials.

Authors should obtain permission to acknowledge from all those mentioned in the Acknowledgements section.

See our  editorial policies  for a full explanation of acknowledgements and authorship criteria.

If you do not have anyone to acknowledge, please write "Not applicable" in this section.

Group authorship (for manuscripts involving a collaboration group): if you would like the names of the individual members of a collaboration Group to be searchable through their individual PubMed records, please ensure that the title of the collaboration Group is included on the title page and in the submission system and also include collaborating author names as the last paragraph of the “Acknowledgements” section. Please add authors in the format First Name, Middle initial(s) (optional), Last Name. You can add institution or country information for each author if you wish, but this should be consistent across all authors.

Please note that individual names may not be present in the PubMed record at the time a published article is initially included in PubMed as it takes PubMed additional time to code this information.

Authors' information

This section is optional.

You may choose to use this section to include any relevant information about the author(s) that may aid the reader's interpretation of the article and understanding of the standpoint of the author(s). This may include details about the authors' qualifications, current positions they hold at institutions or societies, or any other relevant background information. Please refer to authors using their initials. Note this section should not be used to describe any competing interests.

Footnotes

Footnotes should be designated within the text using a superscript number. Footnotes must not be used for references/citations.

Examples of the Basic Springer reference style are shown below. 

See our editorial policies for author guidance on good citation practice.

Web links and URLs: All web links and URLs, including links to the authors' own websites, should be given a reference number and included in the reference list rather than within the text of the manuscript. They should be provided in full, including both the title of the site and the URL, as well as the date the site was accessed, in the following format: The Mouse Tumor Biology Database. . Accessed 20 May 2013. If an author or group of authors can clearly be associated with a web link, such as for weblogs, then they should be included in the reference.

Example reference style:

Article within a journal

Smith J, Jones M Jr, Houghton L (1999) Future of health insurance. N Engl J Med 965:325-329.

Article by DOI (with page numbers)

Slifka MK, Whitton JL (2000) Clinical implications of dysregulated cytokine production. J Mol Med 78:74-80. doi:10.1007/s001090000086.

Article by DOI (before issue publication and with page numbers)

Slifka MK, Whitton JL (2000) Clinical implications of dysregulated cytokine production. J Mol Med. doi:10.1007/s001090000086.

Article in electronic journal by DOI (no paginated version)

Slifka MK, Whitton JL (2000) Clinical implications of dysregulated cytokine production. Dig J Mol Med. doi:10.1007/s801090000086.

Journal issue with issue editor

Smith J (ed) (1998) Rodent genes. Mod Genomics J 14(6):126-233.

Journal issue with no issue editor

Mod Genomics J (1998) Rodent genes. Mod Genomics J 14(6):126-233.

Book chapter, or an article within a book

Brown B, Aaron M (2001) The politics of nature. In: Smith J (ed) The rise of modern genomics, 3rd edn. Wiley, New York.

Complete book, authored

South J, Blass B (2001) The future of modern genomics. Blackwell, London.

Complete book, edited

Smith J, Brown B (eds) (2001) The demise of modern genomics. Blackwell, London.

Complete book, also showing a translated edition [Either edition may be listed first.]

Adorno TW (1966) Negative Dialektik. Suhrkamp, Frankfurt. English edition: Adorno TW (1973) Negative Dialectics (trans: Ashton EB). Routledge, London.

Chapter in a book in a series without volume titles

Schmidt H (1989) Testing results. In: Hutzinger O (ed) Handbook of environmental chemistry, vol 2E. Springer, Heidelberg, p 111.

Chapter in a book in a series with volume titles

Smith SE (1976) Neuromuscular blocking drugs in man. In: Zaimis E (ed) Neuromuscular junction. Handbook of experimental pharmacology, vol 42. Springer, Heidelberg, pp 593-660.

OnlineFirst chapter in a series (without a volume designation but with a DOI)

Saito Y, Hyuga H (2007) Rate equation approaches to amplification of enantiomeric excess and chiral symmetry breaking. Topics in Current Chemistry. doi:10.1007/128_2006_108.

Proceedings as a book (in a series and subseries)

Zowghi D (1996) A framework for reasoning about requirements in evolution. In: Foo N, Goebel R (eds) PRICAI'96: topics in artificial intelligence. 4th Pacific Rim conference on artificial intelligence, Cairns, August 1996. Lecture notes in computer science (Lecture notes in artificial intelligence), vol 1114. Springer, Heidelberg, p 157.

Article within conference proceedings with an editor (without a publisher)

Aaron M (1999) The future of genomics. In: Williams H (ed) Proceedings of the genomic researchers, Boston, 1999.

Article within conference proceedings without an editor (without a publisher)

Chung S-T, Morris RL (1978) Isolation and characterization of plasmid deoxyribonucleic acid from Streptomyces fradiae. In: Abstracts of the 3rd international symposium on the genetics of industrial microorganisms, University of Wisconsin, Madison, 4-9 June 1978.

Article presented at a conference

Chung S-T, Morris RL (1978) Isolation and characterization of plasmid deoxyribonucleic acid from Streptomyces fradiae. Paper presented at the 3rd international symposium on the genetics of industrial microorganisms, University of Wisconsin, Madison, 4-9 June 1978.

Patent

Norman LO (1998) Lightning rods. US Patent 4,379,752, 9 Sept 1998.

Dissertation

Trent JW (1975) Experimental acute renal failure. Dissertation, University of California.

Book with institutional author

International Anatomical Nomenclature Committee (1966) Nomina anatomica. Excerpta Medica, Amsterdam.

In press article

Major M (2007) Recent developments. In: Jones W (ed) Surgery today. Springer, Dordrecht (in press).  

Online document

Doe J (1999) Title of subordinate document. In: The dictionary of substances and their effects. Royal Society of Chemistry. Available via DIALOG. Accessed 15 Jan 1999.

Online database

Healthwise Knowledgebase (1998) US Pharmacopeia, Rockville. Accessed 21 Sept 1998.

Supplementary material/private homepage

Doe J (2000) Title of supplementary material. Accessed 22 Feb 2000.

University site

Doe J (1999) Title of preprint. Accessed 25 Dec 1999.

FTP site

Doe J (1999) Trivial HTTP, RFC2169. Accessed 12 Nov 1999.

Organization site

ISSN International Centre (2006) The ISSN register. Accessed 20 Feb 2007.

General formatting information

Manuscripts must be written in concise English. For help with scientific writing, or with preparing your manuscript in English, please see Springer's Author Academy.

Quick points:

  • Use double line spacing
  • Include line and page numbering
  • Use SI units: Please ensure that all special characters used are embedded in the text, otherwise they will be lost during conversion to PDF
  • Do not use page breaks in your manuscript

File formats

The following word processor file formats are acceptable for the main manuscript document:

  • Microsoft Word (DOC, DOCX)
  • Rich text format (RTF)
  • TeX/LaTeX 

Please note: editable files are required for processing in production. If your manuscript contains any non-editable files (such as PDFs) you will be required to re-submit an editable file if your manuscript is accepted.

For more information, see 'Preparing figures' below.

Additional information for TeX/LaTeX users

You are encouraged to use the Springer Nature LaTeX template when preparing a submission. A PDF of your manuscript files will be compiled during submission using pdfLaTeX and TeX Live 2021. All relevant editable source files must be uploaded during the submission process. Failure to submit these source files will cause unnecessary delays in the production process.

Style and language

For editors and reviewers to accurately assess the work presented in your manuscript you need to ensure the English language is of sufficient quality to be understood. If you need help with writing in English you should consider:

  • Getting a fast, free online grammar check.
  • Visiting the English language tutorial, which covers common mistakes made when writing in English.
  • Asking a colleague who is proficient in English to review your manuscript for clarity.
  • Using a professional language editing service, where editors will improve the English to ensure that your meaning is clear and identify problems that require your review. Two such services are provided by our affiliates, Nature Research Editing Service and American Journal Experts. SpringerOpen authors are entitled to a 10% discount on their first submission to either of these services.

Please note that the use of a language editing service is not a requirement for publication in Bulletin of the National Research Centre and does not imply or guarantee that the article will be selected for peer review or accepted.

Data and materials

For all journals, SpringerOpen strongly encourages all datasets on which the conclusions of the manuscript rely to be either deposited in publicly available repositories (where available and appropriate) or presented in the main paper or additional supporting files, in machine-readable format (such as spread sheets rather than PDFs) whenever possible. Please see the list of recommended repositories in our editorial policies.

For some journals, deposition of the data on which the conclusions of the manuscript rely is an absolute requirement. Please check the Instructions for Authors for the relevant journal and article type for journal specific policies.

For all manuscripts, information about data availability should be detailed in an ‘Availability of data and materials’ section. For more information on the content of this section, please see the Declarations section of the relevant journal’s Instruction for Authors. For more information on SpringerOpen's policies on data availability, please see our editorial policies .

Formatting the 'Availability of data and materials' section of your manuscript

The following format for the 'Availability of data and materials' section of your manuscript should be used:

"The dataset(s) supporting the conclusions of this article is(are) available in the [repository name] repository, [unique persistent identifier and hyperlink to dataset(s) in http:// format]."

The following format is required when data are included as additional files:

"The dataset(s) supporting the conclusions of this article is(are) included within the article (and its additional file(s))."

For databases, this section should state the web/ftp address at which the database is available and any restrictions to its use by non-academics.

For software, this section should include:

  • Project name: e.g. My bioinformatics project
  • Project home page: e.g.
  • Archived version: DOI or unique identifier of archived software or code in a repository (e.g. Zenodo)
  • Operating system(s): e.g. Platform independent
  • Programming language: e.g. Java
  • Other requirements: e.g. Java 1.3.1 or higher, Tomcat 4.0 or higher
  • License: e.g. GNU GPL, FreeBSD etc.
  • Any restrictions to use by non-academics: e.g. licence needed

Information on available repositories for other types of scientific data, including clinical data, can be found in our editorial policies .

What should be cited?

Only articles, clinical trial registration records and abstracts that have been published or are in press, or are available through public e-print/preprint servers, may be cited.

Unpublished abstracts, unpublished data and personal communications should not be included in the reference list, but may be included in the text and referred to as "unpublished observations" or "personal communications" giving the names of the involved researchers. Obtaining permission to quote personal communications and unpublished data from the cited colleagues is the responsibility of the author. Either footnotes or endnotes are permitted. Journal abbreviations follow Index Medicus/MEDLINE.

Any in press articles cited within the references and necessary for the reviewers' assessment of the manuscript should be made available if requested by the editorial office.

Preparing figures

When preparing figures, please follow the formatting instructions below.

  • Figure titles (max 15 words) and legends (max 300 words) should be provided in the main manuscript, not in the graphic file.
  • Tables should NOT be submitted as figures but should be included in the main manuscript file.
  • Multi-panel figures (those with parts a, b, c, d etc.) should be submitted as a single composite file that contains all parts of the figure.
  • Figures should be numbered in the order they are first mentioned in the text, and uploaded in this order.
  • Figures should be uploaded in the correct orientation.
  • Figure keys should be incorporated into the graphic, not into the legend of the figure.
  • Each figure should be closely cropped to minimize the amount of white space surrounding the illustration. Cropping figures improves accuracy when placing the figure in combination with other elements when the accepted manuscript is prepared for publication on our site. For more information on individual figure file formats, see our detailed instructions.
  • Individual figure files should not exceed 10 MB. If a suitable format is chosen, this file size is adequate for extremely high quality figures.
  • Please note that it is the responsibility of the author(s) to obtain permission from the copyright holder to reproduce figures (or tables) that have previously been published elsewhere. In order for all figures to be open access, authors must have permission from the rights holder if they wish to include images that have been published elsewhere in non open access journals. Permission should be indicated in the figure legend, and the original source included in the reference list.

Figure file types

We accept the following file formats for figures:

  • EPS (suitable for diagrams and/or images)
  • PDF (suitable for diagrams and/or images)
  • Microsoft Word (suitable for diagrams and/or images, figures must be a single page)
  • PowerPoint (suitable for diagrams and/or images, figures must be a single page)
  • TIFF (suitable for images)
  • JPEG (suitable for photographic images, less suitable for graphical images)
  • PNG (suitable for images)
  • BMP (suitable for images)
  • CDX (ChemDraw - suitable for molecular structures)

Figure size and resolution

Figures are resized during publication of the final full text and PDF versions to conform to the SpringerOpen standard dimensions, which are detailed below.

Figures on the web:

  • width of 600 pixels (standard), 1200 pixels (high resolution).

Figures in the final PDF version:

  • width of 85 mm for half page width figure
  • width of 170 mm for full page width figure
  • maximum height of 225 mm for figure and legend
  • image resolution of approximately 300 dpi (dots per inch) at the final size

Figures should be designed such that all information, including text, is legible at these dimensions. All lines should be wider than 0.25 pt when constrained to standard figure widths. All fonts must be embedded.
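
Taken together, the print widths and the ~300 dpi target imply minimum pixel dimensions for raster figures. The short sketch below (illustrative only; the exact values applied in production may differ) converts the stated millimetre widths to pixels:

```python
MM_PER_INCH = 25.4  # millimetres per inch, by definition

def required_pixels(width_mm: float, dpi: int = 300) -> int:
    """Pixel count needed so a figure printed at width_mm reaches the given dpi."""
    return round(width_mm / MM_PER_INCH * dpi)

# Half-page (85 mm) and full-page (170 mm) figure widths at ~300 dpi:
print(required_pixels(85))   # about 1004 pixels
print(required_pixels(170))  # about 2008 pixels
```

In other words, a full-page-width raster figure needs roughly 2000 pixels across to meet the 300 dpi guideline.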

Figure file compression

Vector figures should if possible be submitted as PDF files, which are usually more compact than EPS files.

  • TIFF files should be saved with LZW compression, which is lossless (decreases file size without decreasing quality) in order to minimize upload time.
  • JPEG files should be saved at maximum quality.
  • Conversion of images between file types (especially lossy formats such as JPEG) should be kept to a minimum to avoid degradation of quality.

If you have any questions or are experiencing a problem with figures, please contact the customer service team at [email protected] .

Preparing tables

When preparing tables, please follow the formatting instructions below.

  • Tables should be numbered and cited in the text in sequence using Arabic numerals (i.e. Table 1, Table 2 etc.).
  • Tables less than one A4 or Letter page in length can be placed in the appropriate location within the manuscript.
  • Tables larger than one A4 or Letter page in length can be placed at the end of the document text file. Please cite and indicate where the table should appear at the relevant location in the text file so that the table can be added in the correct place during production.
  • Larger datasets, or tables too wide for A4 or Letter landscape page can be uploaded as additional files. Please see [below] for more information.
  • Tabular data provided as additional files can be uploaded as an Excel spreadsheet (.xls) or comma-separated values (.csv). Please use the standard file extensions.
  • Table titles (max 15 words) should be included above the table, and legends (max 300 words) should be included underneath the table.
  • Tables should not be embedded as figures or spreadsheet files, but should be formatted using the ‘Table object’ function in your word processing program.
  • Color and shading may not be used. Parts of the table can be highlighted using superscript, numbering, lettering, symbols or bold text, the meaning of which should be explained in a table legend.
  • Commas should not be used to indicate numerical values.

If you have any questions or are experiencing a problem with tables, please contact the customer service team at [email protected] .

Preparing additional files

As the length and quantity of data is not restricted for many article types, authors can provide datasets, tables, movies, or other information as additional files.

All Additional files will be published along with the accepted article. Do not include files such as patient consent forms, certificates of language editing, or revised versions of the main manuscript document with tracked changes. Such files, if requested, should be sent by email to the journal’s editorial email address, quoting the manuscript reference number.

Results that would otherwise be indicated as "data not shown" should be included as additional files. Since many web links and URLs rapidly become broken, SpringerOpen requires that supporting data are included as additional files, or deposited in a recognized repository. Please do not link to data on a personal/departmental website. Do not include any individual participant details. The maximum file size for additional files is 20 MB each, and files will be virus-scanned on submission. Each additional file should be cited in sequence within the main body of text.


Associated Society

National Research Centre

The NRC serves the country's key production and services sectors through research conducted in different areas of science and technology, as well as through scientific consultation and training.

NRC Mission

The NRC mission is to conduct basic and applied research within the major fields of interest in order to develop production and services sectors.

Annual Journal Metrics

2023 Speed

  • 14 days from submission to first editorial decision for all manuscripts (median)
  • 67 days from submission to acceptance (median)

2023 Usage

  • 1,233,773 downloads
  • 465 Altmetric mentions

Egyptian Knowledge Bank (EKB) Journals

Visit our collection of Egyptian journals.

  • ISSN: 2522-8307 (electronic)



Progress Reports

You write a progress report to inform a supervisor, associate, or client about progress you have made on a project over a specific period of time. Periodic progress reports are common on projects that go on for several months (or more). Whoever is paying for the project wants to know whether tasks are being completed on schedule and on budget. If the project is not on schedule or on budget, they want to know why and what additional costs and time will be needed.

Progress reports answer the following questions for the reader:

  •  How much of the work is complete?
  • What part of the work is currently in progress?
  • What work remains to be done?
  • When and how will the remaining work be completed?
  • What changes, problems or unexpected issues, if any, have arisen?
  • How is the project going in general?

Purpose of a Progress Report

The main function of a progress report is persuasive:  to reassure clients and supervisors that you are making progress, that the project is going smoothly, and that it will be completed by the expected date — or to give reasons why any of those might not be the case. They also offer the opportunity to do the following:

  • Provide a brief look at preliminary findings or in-progress work on the project
  • Give your clients or supervisors a chance to evaluate your work on the project and to suggest or request changes
  • Give you a chance to discuss problems in the project and thus to forewarn the recipients
  • Force you to establish a work schedule, so that you will complete the project on time.

Format of a Progress Report

Depending on the length and importance of the project, and on the recipient, a progress report can take forms ranging from a short informal conversation to a detailed, multi-page report. Most commonly, progress reports are delivered in the following forms:

  • Memo: a short, semi-formal report to someone within your organization (can range in length from 1–4 pages)
  • Letter: a short, semi-formal report sent to someone outside your organization
  • Formal report: a long, formal report sent to someone within or outside of your organization
  • Presentation: an oral presentation given directly to the target audience.

Organizational Patterns for Progress Reports

The recipient of a progress report wants to see what you’ve accomplished on the project, what you are working on now, what you plan to work on next, and how the project is going in general. The information is usually arranged with a focus either on time or on task, or a combination of the two:

  • Focus on time: shows the time period (previous, current, and future) and the tasks completed or scheduled to be completed in each period
  • Focus on specific tasks: shows the order of tasks (defined milestones) and the progress made in each time period
  • Focus on larger goals: focuses on the overall effect of what has been accomplished.

Information can also be arranged by report topic. You should refer to established milestones or deliverables outlined in your original proposal or job specifications. Whichever organizational strategy you choose, your report will likely contain the elements described below.

1. Introduction

Review the details of your project’s purpose, scope, and activities. The introduction may also contain the following:

  • date the project began; date the project is scheduled to be completed
  • people or organization working on the project
  • people or organization for whom the project is being done
  • overview of the contents of the progress report.

2. Project status

This section (which could have sub-sections) should give the reader a clear idea of the current status of your project. It should review the work completed, work in progress, and work remaining to be done on the project, organized into sub-sections by time, task, or topic. These sections might include

  • Direct reference to milestones or deliverables established in previous documents related to the project
  • Timeline for when remaining work will be completed
  • Any problems encountered or issues that have arisen that might affect completion, direction, requirements, or scope.

3.  Conclusion

The final section provides an overall assessment of the current state of the project and its expected completion, usually reassuring the reader that all is going well and on schedule. It can also alert recipients to unexpected changes in direction or scope, or problems in the project that may require intervention.

4. References section, if required.

Technical Writing Essentials Copyright © by Suzan Last and UNH College of Professional Studies Online is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Progress in Psychological Science. The Importance of Informed Ignorance and Curiosity-Driven Questions

  • Regular Article
  • Open access
  • Published: 25 May 2020
  • Volume 54, pages 613–624 (2020)

  • Lucas B. Mazur


In recent decades we have seen an exponential growth in the amount of data gathered within psychological research without a corresponding growth of theory that could meaningfully organize these research findings. For this reason, considerable attention today is given to discussions of such broader, higher-order concepts as theory and paradigm. However, another area important to consider is the nature of the questions psychologists are asking. Key to any discussion about the scientific status of psychology or about progress in the field (scientific or otherwise) is the nature of the questions that inspire psychological research. Psychologists concerned about scientific progress and the growth of theory in the field would be well served by more robust conversations about the nature of the questions being asked. Honest, curiosity-driven questions—questions that admit to our ignorance and that express an active and optimistic yearning for what we do not yet know—can help to propel psychology forward in a manner similar to the development of theory or paradigm. However, existing as it does in the “twilight zone” between the natural sciences and the humanities, psychology is fertile ground for questions of wide-ranging natures, and thus progress in the field can be variously understood, and not all of these understandings will be “scientific.” Recent psychological research in three areas (cognition, memory, and disorders/differences of sex development) is discussed to illustrate how curiosity-driven questions asked from a position of informed ignorance can lead to progress in the field.



It is “the right question, asked the right way, rather than the accumulation of more data, that allows a field to progress.” (Firestein 2012 , p. 98)

In recent decades we have seen an exponential growth in the amount of data gathered within psychological research. Study after study is conducted to test various hypotheses, and yet in psychology at large there is little consensus regarding the core concepts in use, and there is a general absence of broader theories or overarching paradigms that would allow for the meaningful organization of research findings (Valsiner 2017 ; Zagaria et al. 2020 ). For this reason, considerable attention today is given to discussions of such broader or higher-order concepts as theory and paradigm, and rightly so. Within such discussions, some psychologists have argued that the field would be well served by adopting an inclusive, pluralistic, “meta-theory,” such as evolutionary psychology (Zagaria et al. 2020 ). The main aim of the current piece is to suggest that, before looking for unifying (“scientific”) theories, psychologists interested in the scientific progress of the field should look into the nature of the questions being asked and the degree to which those questions open the door to the possibility of scientific progress in the first place. We will examine some characteristics of the kinds of questions that allow for the possibility of scientific progress, and we will briefly look at some areas in psychology in which we see such questions being asked. Much psychological work—valid and valuable psychological work—is not scientific, and much good work would be difficult to link with any notion of progress in the field, scientific or otherwise. For many psychologists, and within a considerable amount of psychological work, the “soft” status of the field is not a concern. 
Thus, before we can assess the potential merits of various meta-theories, it would seem to be important that we first ask if we are generally studying psychology in a manner that is suggestive of a collective, scientific undertaking, and thus one in which inclusive, pluralistic, meta-theories would potentially be of use. By examining the nature of our questions, we can explore the degree to which the practices of psychologists point towards the possibility of various versions of scientific progress in the field, or perhaps in different, non-scientific directions.

As assessed along a number of metrics (e.g., the theories-to-laws ratio, the rate of consultation between researchers in the field, citation frequency of new researchers, citation concentration), psychology is considered a “soft science” relative to the natural sciences (Zagaria et al. 2020 ). In addition to these factors, the soft status of psychology also arises from the nature of the questions that give impetus to psychological research. In as far as those questions are curiosity-driven—which is to say that they profess an “informed ignorance” and an honest curiosity about the world—they afford us a way to meaningfully assess progress (scientific or otherwise). Honest, curiosity-driven questions are both a confession of ignorance and a profession of openness to the as-yet-unknown that lies beyond. “Thoroughly conscious ignorance is the prelude to every real advance in science” (James Clerk Maxwell quoted in Firestein 2012 , p. 7).

While reflecting on the nature of particular questions or of questions in general can feel philosophically naïve or overwhelmingly complicated in turn, the matter is worth raising here in this brief piece as in practice psychologists rarely explicitly engage in such reflection. Psychologists rarely, in practice, publicly profess ignorance, and do so even more rarely as a badge of honor, as an assertion of hope, as a rallying cry. Rather than reflecting on the sense of wonder that can appear in our ignorance, psychologists often in effect seem to want themselves and others to wonder at what is already known. Rather than exploring the unknown, psychologists focus more on the methods we use to get to know it, and the theories we hope to develop to know it better. By focusing exceedingly on issues of method, methodology, hypothesis, theory, or goals, psychologists often lose sight of the object of study, and in the process the curiosity driving our questions about the world often becomes of secondary importance. Unless we are pushed forward by our ignorance and curiosity, and allow ourselves to be pulled forward by what we find outside of ourselves in response, the scientific status of psychology will remain standing on what Zagaria et al. (2020) call “clay feet.” The question of psychology’s scientific status and the possibilities of scientific progress in the field pertain not so much to whether or not psychology can stand on its own, as whether or not it has anywhere to go.

Progress Arising from Informed Ignorance

The notion of progress in science is both exceedingly popular and particularly slippery (e.g., Debrock and Hulswit 1994 ; Laudan 1978 ). Our current understanding of scientific progress developed in the West over the past several centuries, and has been understood in several different ways over that time (for a particularly helpful explication of that history, see Harrison 2015 ). In as far as we can speak of science centuries ago, the idea of scientific progress was initially understood as the perennially new, individual-level cultivation of internal virtue with the assistance of a semiotically pregnant world (as in Aquinas’s understanding of scientia ). More recently, it has come to be seen as a linear, collective-level accumulation of external, objective data from a mechanistic world. The philosopher Charles Taylor ( 2007 ) refers to these changes as part of the shift from an “enchanted” to a “disenchanted” world. Over the course of this transition, science came to be understood as an endeavor divorced from other areas of knowledge, such as those found in the humanities and the arts (Daston and Galison 2007 ). For many, science has come to be seen not just as a method, but also a philosophy; a shift that others find problematic (e.g., Sheen 2019 ). Psychology emerged as an independent field in the nineteenth century during a particularly intense period of this transition, and debates regarding psychology’s academic allegiances were there from the very beginning (Rzepa and Dobroczyński 2019 ; Valsiner 2012 ). 
Much like the historical development of nationally-conscious languages whose promotors made decisions regarding phonetics, grammar, and lexicon so as to differentiate their languages from those of neighboring peoples (Snyder 2003 ), psychologists worked very hard to distance the field from those with which it was the most similar, particularly philosophy (e.g., as seen in the work of Gustav Fechner on “psychophysics,” in the famous laboratory of Wilhelm Wundt, or in the intellectual development and interesting career path of Władysław Heinrich between psychology, philosophy, and pedagogy; Danziger 1990 ; Rzepa and Dobroczyński 2019 ). Many early psychologists fought to steer psychology in the direction of the natural sciences, and in doing so, largely adopted a “disenchanted” understanding of progress.

Closely related to the notion of scientific progress, and similarly complex, is an activity that also appears at first glance to be patently simple, namely, asking questions (Firestein 2012 ). The nature of the questions we ask about the world gives shape to the nature of our scientific theories. Similarly, our broader understandings of the world give shape to the questions we ask. Thus, questions can be thought of as containing both deductive and inductive elements (and constituting a “chicken or egg” dilemma). Amidst the intellectual battles accompanying the historical “disenchantment” of the world, nineteenth century social scientists were well aware of how the questions we pose are expressions not only of what we don’t know, but also of our intellectual allegiances to existing schools of thought (e.g., Sheen 2019 ). For example, Auguste Comte (1798–1857) wanted the new field of “social physics” (later “sociology”) to be a purely “scientific” endeavor, free from metaphysical or philosophical musings. In its allegiance to empirical science and its rejection of metaphysics, Comte’s positivism finds expression in the questions he believed social scientists should ask and therefore “takes the terrestrial horizon as its boundary, without prejudice to what might lie beyond it. Abandoning too high a level of speculation, it retired to lowlier positions of more immediate interest” (de Lubac 1995 , p. 159). Thus, in the writings of many early social scientists—academics who were acutely aware that the intellectual allegiances and academic independence of their newly emerging fields were in doubt—we see a clear awareness that questions come in all shapes and sizes, and that not all questions are equal regarding the direction in which they propel us (moral and value judgments aside) (Sehon 2005 ). They were aware that it is not only important to ask questions, but also that it is important to ask questions about the questions we are asking.

Curiosity-Driven Questions

It is not easy to ask meaningful questions, but doing so is of fundamental importance for the advance of any scientific field. Neurobiologist Stuart Firestein (2012) has argued that within the natural sciences it is “the right question, asked the right way, rather than the accumulation of more data, that allows a field to progress” (p. 98). In as far as we see its status as a “soft science” as problematic, we can say that psychology, on the whole, is struggling to ask meaningful questions in the right way, that is, in a way that would allow for scientific progress to be made. In as far as psychology can be considered a science or even an academic pursuit, it is important that psychologists reflect on the questions being asked in the field. Asking honest questions indicates an awareness of our ignorance, as well as a curiosity about what lies beneath.

We have been long aware of differences in quality between the questions we ask. For example, C. S. Peirce ( 1958 ) argued that real questions arise from genuine, “living” doubt, not what he calls “paper doubts,” that is, questions that arise from the compulsive need to ask questions—as often found among academics—rather than from honest curiosity and an awareness of one’s own ignorance. “According to Peirce, the doubt that brings forth inquiry must be genuine. It is not sufficient to say or write that one doubts; ‘paper doubt’ does not amount to legitimate disbelief” (Bergman 2009 , p. 16). With this distinction in mind, we can also see that there is a subtle, but fundamental, difference between curiosity-driven questions and hypothesis testing. While not mutually exclusive, and ideally complementary, in practice hypothesis testing often overshadows genuine questions in psychological research. Not only is there the overwhelming pressure in academia to publish “significant findings,” but those findings are generally expected to confirm the initial hypotheses. Within published psychological research there are few surprises. This applies to both quantitative and qualitative research (even in the absence of explicit hypotheses, as is often the case of the latter). In this climate, unsurprisingly, psychologists often become cheerleaders for their own hypotheses (or general expectations). Additional theoretical, political, professional, and social biases also abound (Campbell and Manning 2018 ; Duarte et al. 2015 ; Ferguson and Heene 2012 ; Gerber et al. 2001 ), making the majority of research presented in published articles or at conferences thoroughly predictable. While hypothesis testing can be helpful to the extent that it contains within itself the mechanisms for assessing “significance” and thereby for meaningfully judging outcomes, it is unhelpful to the extent that we favor an outcome before the fact. 
Thoroughly conscious ignorance requires the meaningful assessment of research outcomes, as afforded by hypothesis testing, but also an openness to, and genuine curiosity about, those outcomes. If the outcomes are essentially contained within the hypotheses, and known in advance, the process is not future oriented, and no progress can be made. Having lost sight of the tension within every honest question between what is known and what is unknown, psychologists often treat their hypotheses, in practice, as rhetorical questions.

Another relatively small, but not unimportant, indicator of this can be found in the questions that appear in the limitations section at the end of research articles (rather than lying at the core of the work). What is more, the limitations or suggestions for future research are often actually backhanded suggestions in support of the given claims (e.g., usually by suggesting extensions or replications). Such practices in effect anchor psychological questions to the past and to the already-known, rather than projecting them into the future and the as-yet-unknown. In as far as we root for a hypothesis, and do so at the expense of genuine, curiosity-driven questions, we run the risk of silencing the data and ignoring the subject. This can also happen when our constructs are too rigidly defined, indicating that a lack of universal agreement about even core constructs in the field is not necessarily a bad thing, that is, as long as this flexibility allows us to turn meaningfully to the data, or rather, to the subject under study. In fact, intentionally loosening up on widely held definitions can shift priority from the researcher and the past (the known), to the subject and the future (the as-yet-unknown), and yet, “That principle is one that is difficult for many scientists to swallow, because it relaxes control, gets the experimenter out of the driver’s seat, and leaves it up to the subjects […] to produce the results” (Firestein 2012 , p. 97). The generative quality of honest questions, as well as the thorough excitement and deep discomfort that such inquiry can cause, is wonderfully expressed in the following statement made by Robert Boyle (1627–1691):

“…an Inquisitive Naturalist finds his work to increase daily upon his hands, and the event of his past Toils, whether it be good or bad, does but engage him into new ones, either to free himself from his scruples, or improve his successes. So, that, though the pleasure of making Physical Discoveries, is, in it self consider’d, very great; yet this does not a little impair it, that the same attempts which afford that delight, do so frequently beget both anxious Doubts, and a disquieting Curiosity.” (quoted in Hunter 2000 , p. 13).

Criteria for Judging the Data

Since at least the times of ancient Greece we have been wrestling with the “dilemma of dualism,” i.e., how it is that the seeker of knowledge can come to know that which is unknown (Debrock and Hulswit 1994 ). How can one recognize that which one has never yet seen? While not resolving this age-old riddle, we can recognize that asking questions presupposes the ability to assess the meaningfulness of what one hears in response. It presupposes the (at least partial) intelligibility of what follows the question; it presumes the ability to see in a response an answer. In this sense, questions contain broader conceptualizations that meaningfully combine individual data. While grounded in what we already know (or think we know), asking curiosity-driven questions is also essentially about the future and our confident movement into that future (Firestein 2012 ; Valsiner 2017 ). Honest, curiosity-driven questions—questions that admit to our ignorance and that express an active yearning for what we do not yet know—can help to propel psychology forward in a manner similar to the development of theory or paradigm. The advances that we see in the natural sciences have come from the ability of scientists in those fields to ask honest, curiosity-driven questions and to meaningfully judge the various responses they get from the natural world in reply. Answered questions inspire new questions, more data are gathered, theories and paradigms emerge to meaningfully organize and compare the “answers” we have found, and the march of progress is set afoot. Such progress is seen relatively rarely in psychology, at least in part, because of the nature of the questions being asked. The range of questions asked by psychologists reflects the wide range of epistemological and ontological positions professed in the field. 
For example, broadly speaking, there are psychologists who understand psychology to be an empirical, experimental science (e.g., “all psychology must be based on experiment, and that it is quite improper to set aside an area of study labeled ‘experimental’ as if to suggest that other areas of psychology exist which are not experimental”; Bugelski 1951 ), just as there are those who believe it can never be an empirical science (e.g., “an objective, accumulative, empirical and theoretical science of psychology is an impossible project”; Smedslund 2016 , p. 185). These two broad camps exist, and will continue to exist, convinced of the value of their enterprise (Mazur and Watzlawik 2016 ). Gustaw Ichheiser (1897–1969) saw this fundamental tension not as an impasse, but as an opportunity: “[S]ocial scientists should, in my opinion, not aspire to be as ‘scientific’ and ‘exact’ as physicists or mathematicians, but should cheerfully accept the fact that what they are doing belongs to the twilight zone between science and literature” (cited in Rudmin et al. 1987 , p. 171).

Psychology belongs to this twilight zone in part because the body of criteria psychologists use for perceiving data as answers to their questions is broad, inclusive, and often shifting—and arguably more so than in other fields. While literary scholars generally do not use “scientific” criteria to study literature, and physicists generally do not use the judgment criteria of literature within physics, psychologists more readily and more often shift between the criteria associated with one or the other of what C. P. Snow (1959) called “the two cultures” (i.e., the sciences and the humanities). While many, such as Ichheiser, see this as a strength of psychology, it is also arguably responsible for a considerable degree of confusion (Kagan 2009). Although the general conceptual separation of the “two cultures” is a relatively recent historical development (Harrison 2015), and one that many have argued is not as clear-cut even today as it may appear (Gould 2003; McAllister 1996; Sullivan 1933), in as far as we give credence to this and similar distinctions between fields of study, we ought to take seriously the differences in judgment criteria both between and within fields. This is particularly important within psychology precisely because psychologists often oscillate between various judgment criteria, none of which has primacy within the field. Judging the meaningfulness of data involves a tension between the restrictiveness of judgment criteria that focus our vision and an openness of those criteria so as not to lose sight of their limitations relative to the object of study. However, this essential tension can lose its “bite” if one can in effect shape that balance at will.

As suggested by Ichheiser and others, flexibility in method and methodology within psychology can be a strength, that is, as long as it opens up new avenues for fruitfully studying the subject. After all, polyvalence is understood to be part of our psychological lives (Boesch 1991). However, there is a double-sided risk that comes with this flexibility. On the one hand, we can switch between judgment criteria too often and too easily, thereby ultimately devaluing what the subject says to us in our research; after all, criteria for determining “significance” (quantitative or otherwise) are ultimately intended to serve our ability to perceive the subject in ways that would otherwise be unavailable to us. On the other hand, we can swing too far in the other direction, becoming too wedded to any one judgment criterion over others, thereby undercutting the richness of our conceptual toolkit and, more importantly, of the subject. When facing the complexities of the world, judgment criteria are helpful precisely because they allow us to perceive the world, to perceive the subject, in ways that would otherwise have escaped our awareness. Balance in the use of judgment criteria saves us from both “trivial order” and “barbaric vagueness”; from the “extremes of premature closure and narrow-mindedness on the one hand, and interminable indecision and ‘broadmindedness’ on the other” (Aeschliman 1983, pp. 69–71). To simplify matters somewhat, we see this balancing act illustrated in the tension that often emerges in psychology between qualitatively-minded researchers and quantitatively-minded researchers. Qualitatively-minded researchers often criticize quantitative, experimental research as being overly restrictive or even closed to the subject (“forcing the subject into the empirical methods”), while quantitatively-minded researchers often criticize qualitative research as using weak or “wishy-washy” criteria for evaluating the subject.
By holding onto our techniques too tightly, we ignore other options, overlook the limits of method, and ultimately restrict potential discoveries. By letting go too much of such criteria, we ignore the power of those techniques and thus we deny the subject the chance to speak to us through them. By either overly restricting or overly expanding our criteria we in effect lose sight of the subject.

Examples of Curiosity-Driven Questions

To suggest that we reflect more on the nature of the questions we ask, or to suggest the value of curiosity-driven questions for psychological science, is not to suggest the value of any concrete question(s) in particular. What is more, given the nature of curiosity-driven questions arising from informed ignorance, it is impossible to identify such questions at face value (that is, on the basis of any particular wording). In other words, while we have been referring to this as a “question,” or rather a type of question, it is in reality more of an approach or practice. Just as the question “Why am I sick?” can be scientific, moral, rhetorical, etc. (Sehon 2005), any particular motivating question posed by psychologists can be of various natures. Similarly, as curiosity-driven questions are defined by their relation to informed ignorance and their receptiveness to the subject, they necessarily extend in time and space beyond what we usually think of as the question itself. Thus, any example of such a question would need to explore the foundations provided by informed ignorance and the manner in which the question indicates a responsiveness to the subject (and the “data”). For this reason it is perhaps more accurate to think of them as research practices or processes.

An example of such a process can be seen in psychological research on consciousness. The notion of consciousness is of fundamental importance within psychology, but also in broader, non-academic discussions. It is also one of those often used, but incredibly “fuzzy,” concepts within the field (Zagaria et al. 2020 ). One area of research within this general topic concerns whether non-human animals have what we call consciousness. As we remain somewhat unsure about just exactly what this term means, it is particularly difficult to look for it empirically. However, as researchers have learned to let go of their “human biases” and listened to their non-human subjects, they have expanded their view on how consciousness might “look” in non-human animals and how we might study it there (Firestein 2012 ). For example, rather than expecting consciousness to appear like a conversation between two adult humans, researchers have been making strong claims for the presence of consciousness in a wide range of animals (e.g., Plotnik et al. 2010 ). In a related line of research, work on theory of mind (ToM) continues to produce new tests to identify the appearance and development of such elements of consciousness in children at various ages (Jakubowska and Bialecka-Pikul 2020 ; Wellman et al. 2001 ). This has required researchers to in effect stop thinking like adult scientists and to start thinking like younger and younger children. By listening to how children see and interact with the world, psychologists have come to better understand the development of the perception of mental states, both one’s own and those of others. In the case of both children and animals, psychologists have been asking questions out of a position of informed ignorance and they have been listening to their subjects. As a result, not only is it fairly safe to say that progress has been made, but new questions have emerged, and continue to emerge, as a result. 
Such research also builds the collective body of knowledge on the subject, a hallmark of modern science (Harrison 2015 ), and such curiosity towards the subject makes the research of others relevant, especially more recent research—another hallmark of “hard” science (Zagaria et al. 2020 ).

Another area of research that constitutes a nice example of the kinds of questions here under consideration concerns the nature of memory. Psychologists have been interested in the nature of memory since the very beginnings of the field. When looking across the history of memory research, hiccups and oddities aside, one would be hard-pressed to defend the position that progress has not been made, even “scientific” progress. Across a wide range of research methodologies and methods, a diverse group of researchers have expanded our knowledge of the complex, plastic, and dynamic nature of memory (e.g., Lamprecht and LeDoux 2004 ). Researchers have been open to their subject(s) and as a result, have come to suggest radical changes to our conceptualizations of memory, or rather, memories (Bourtchouladze 2004 ). That the issue of memory continues to puzzle and fascinate researchers indicates not the scientific failure of the field, but the expansive, generative nature of honest inquiry.

Another example can be seen in research over the past several decades on gender within cases of “Disorders/Differences of Sex Development” (DSD; formerly called intersex, hermaphroditism, pseudohermaphroditism, sex errors of the body, or ambiguous genitalia). DSD should not be confused with what is known as “gender dysphoria” (APA 2013), that is, cases in which a person believes their gender identity to not match their biological sex or the gender to which they were assigned. DSDs have been defined as “congenital conditions in which development of chromosomal, gonadal, or anatomical sex is atypical” (Lee et al. 2006). Cases of DSD challenge traditional thinking regarding gender identity development by showing that such development is complex, involving numerous prenatal and postnatal factors. For example, someone could be born with 46,XY chromosomes (i.e., a male karyotype), internal testes and no internal female reproductive organs, but female-appearing external genitalia. Such a person is likely to be assigned female at birth and/or thought of as female by their family, thereby beginning a process of female gender identity development which is discordant with their chromosomal and gonadal status. What is more, in the presence of complete androgen insensitivity syndrome (CAIS), such an individual may come to further develop an external female phenotype (e.g., developing breasts). Similarly, there is a large and growing body of research on “classical” congenital adrenal hyperplasia (CAH) in individuals with a 46,XX karyotype, wherein these individuals show marked hormone abnormalities (e.g., related to androgens), which result in various forms and degrees of “masculinization” (Meyer-Bahlburg 2014).
DSDs raise fascinating questions regarding the nature of gender, not to mention the countless other areas of life connected thereto (e.g., sexual attraction, sexual functioning, reproduction, various non-sexual behavioral patterns, self-image, cognitive functioning, and mental health).

Scientific knowledge regarding various DSDs has increased over the past several decades (e.g., Lee et al. 2006; Lee et al. 2016), as has our understanding of the biological, psychosocial, and cultural factors that contribute to what we generally call gender, including “gendered behavior” and “gender identity” (Meyer-Bahlburg 2014). By better understanding DSDs in particular, we are able to better understand gender in general. What is more, not only does there remain a great wealth of ignorance in this area from which curiosity-driven questions can arise, but the more “informed” that ignorance becomes, the more curiosity-driven questions emerge. However, psychological research in this area is not only concerned with scientific progress, as generally understood, but also with issues related to such matters as quality of life, quality of relationships, purpose, sense of self, and personal growth, issues which may or may not be what we would often consider “scientific” considerations. What is more, there is also a considerable and growing amount of activism related to DSDs. Progress can mean different things to different people working in this field. In other words, in cases of DSD (as elsewhere in psychology, including work on consciousness and memory), the questions psychologists ask are not necessarily those that lead to scientific progress. In fact, psychological work in this area is often clearly non-scientific. There also remains considerable disagreement regarding just what the science is supposed to say beyond the realms of method. Taken together, this is an example of the “twilight zone” of which Ichheiser spoke and with which he was both comfortable and confident.

Within research on all three of the areas discussed above we not only find strong claims that scientific progress has been made, but also strong claims that a considerable portion of psychological work is not scientific. What is more, there are also disagreements about the ways in which, or the degree to which, science can, or should, inform other areas of our lives. Again, this is not a judgment regarding the value of scientific or non-scientific work, but merely an observation that differing undertakings exist within the field. One such division can be found between the assertion of the known that we see in advocacy and the assertion of the unknown that we see in science. Much has been written about the complex relationship between science and advocacy, and it is a topic that extends well beyond the scope of the current piece. However, for practical purposes it is worth reflecting on a basic difference between science and advocacy, even if that involves somewhat stereotyped and simplified images of both. Broadly speaking, while advocacy involves the promotion of what we know, or think we know, science promotes our ignorance and directs us towards what we do not yet know—and it does so again and again. Advocacy can stymie scientific progress by restricting what can and cannot be asked, by pushing an agenda regardless of what research might be produced—in other words, by restricting our access to the subject. However, advocacy can also encourage researchers to pose new, curiosity-driven questions that are responsive to the subject. Just as advocacy can inspire or restrict science, science can likewise support or hamper advocacy. There is no inherent moral value to this conflict, as science and advocacy can work both wonders and horrors. While the intertwining of science and advocacy is certainly much more complex than this, the distinction remains of considerable practical use, especially when looking at the questions that drive psychological work.

The point here is that if we listen closely to the questions psychologists ask today we will often hear many different theoretical positions and numerous fundamentally different understandings of progress (e.g., see the questions posed by 33 “influential psychologists” in APA 2018 ). In that regard, such distinctions as the science/advocacy difference (among others) remain of practical utility, at least in as far as psychologists are concerned with the scientific development of the field, or with progress in the field (however defined). As discussed above, within discussions of the scientific status of psychology it is worth reflecting on the assumptions contained in the questions posed, the degree to which they are future-oriented, and the degree to which they are responsive to, and receptive of, the subject. At the same time, not all meaningful and valuable questions need to lay the foundations for scientific progress, or any kind of progress for that matter (e.g., Sehon 2005 ). Nevertheless, questions remain of fundamental importance for science and for the notion of progress, scientific or otherwise.

Ideally, theory both grounds us in what we know and propels us into the unknown. In light of this, it is certainly worth reflecting on, and attempting to readjust, the current imbalance in psychology between data and theory. At the same time, it is worth examining another important part of the research process, namely, the nature of our questions. What is it that captures our genuine wonder? Where does the recognition of our ignorance simultaneously evoke a belief that we can overcome it? It is there that we find the liminal space between the known and the as-yet-unknown; it is there that we can begin to meaningfully trace potential progress and to identify the nature of that progress (not all of which will be “scientific” as currently understood). In our concern for the progress of “psychological science,” it is worth reflecting on the potentially progressive quality of our questions. Do they reflect an honest curiosity about the world? Are they boldly expressive and humbly receptive? To the extent that they balance between the known and the unknown, and between the “trivial order” and the “barbaric vagueness” of the given field, it is important that we recognize that not all questions are equal. Questions come in a wide range of types and they serve all sorts of ends, not all of which bespeak the possibility of scientific progress as currently understood. This is not in itself problematic, far from it. Yet, for psychologists concerned with the scientific status of the field or with the possibility of its progress as an empirical science, it is worth asking if our questions provide fertile ground for the notion of such progress in the first place.

Aeschliman, M. D. (1983). The restitution of man. C. S. Lewis and the case against scientism . Grand Rapids: William B. Eerdmans Publishing Company.


APA, American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: Author.


APA, American Psychological Association (2018). What’s next? We asked 33 influential psychologists to identify the critical questions the discipline must answer. Available online at

Bergman, M. (2009). Peirce’s philosophy of communication. The rhetorical underpinnings of the theory of signs . New York, NY: Continuum International Publishing Group.

Boesch, E. E. (1991). Symbolic action theory and cultural psychology . Berlin, Germany: Springer.

Bourtchouladze, R. (2004). Memories are made of this: How memory works in humans and animals . New York, NY: Columbia University Press.

Bugelski, B. R. (1951). A first course in experimental psychology . New York, NY: Henry Holt and Company.

Campbell, B., & Manning, J. (2018). The rise of victimhood culture. Microaggressions, safe spaces, and the new culture wars . New York, NY: Palgrave.

Danziger, K. (1990). Constructing the subject. Historical origins of psychological research . Cambridge, UK: Cambridge University Press.

Daston, L., & Galison, P. (2007). Objectivity . Brooklyn, NY: Zone Books.

de Lubac, H. (1995). The drama of atheist humanism . San Francisco, CA: Ignatius Press.

Debrock, G., & Hulswit, M. (Eds.). (1994). Living doubt: Essays concerning the epistemology of Charles Sanders Peirce . Dordrecht, Netherlands: Kluwer.

Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (2015). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 38 , 1–13.


Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7 (6), 555–561.

Firestein, S. (2012). Ignorance. How it drives science . New York: Oxford University Press.

Gerber, A. S., Green, D. P., & Nickerson, D. (2001). Testing for publication bias in political science. Political Analysis, 9 (4), 385–392.

Gould, S. J. (2003). The hedgehog, the fox, and the magister’s pox . New York: Harmony Books.

Harrison, P. (2015). The territories of science and religion . Chicago: University of Chicago Press.

Hunter, M. (2000). Robert Boyle (1627–91). Scrupulosity and science . Woodbridge, UK: The Boydell Press.

Jakubowska, J., & Bialecka-Pikul, M. (2020). A new model of the development of deception: Disentangling the role of false-belief understanding in deceptive ability. Social Development, 29 (1), 21–40.

Kagan, J. (2009). The three cultures: Natural sciences, social sciences, and the humanities in the 21st century . Cambridge, UK: Cambridge University Press.

Lamprecht, R., & LeDoux, J. (2004). Structural plasticity and memory. Nature Reviews Neuroscience, 5 (1), 45–54.

Laudan, L. (1978). Progress and its problems. Towards a theory of scientific growth . Berkeley: University of California Press.

Lee, P. A., Houk, C. P., Ahmed, S. F., & Hughes, I. A. (2006). Consensus statement on management of intersex disorders. International consensus conference on intersex. Pediatrics, 118 (2), 488–500.

Lee, P. A., Nordenström, A., Houk, C. P., Ahmed, S. F., Auchus, R., Baratz, A., Baratz Dalke, K., Liao, L.-M., Lin-Su, K., Looijenga, L. H. J., Mazur, T., Meyer-Bahlburg, H. F. L., Mouriquand, P., Quigley, C. A., Sandberg, D. E., Vilain, E., Witchel, S., & the Global DSD Update Consortium. (2016). Global disorders of sex development update since 2006: Perceptions, approach and care. Hormone Research in Pædiatrics, 85 (3), 158–180.

Mazur, L. B., & Watzlawik, M. (2016). Debates about the scientific status of psychology: Looking at the bright side. Integrative Psychological and Behavioral Science, 50 (4), 555–567.

McAllister, J. W. (1996). Beauty and revolution in science . Ithaca, NY: Cornell University Press.

Meyer-Bahlburg, H. F. L. (2014). Psychoendocrinology of congenital adrenal hyperplasia. In M. I. New, A. Parsa, B. W. O’Malley, O. Lekarev, T. T. Yuen, & G. D. Hammer (Eds.), Genetic steroid disorders (pp. 285–300). Cambridge, MA: Academic Press.

Chapter   Google Scholar  

Peirce, C. S. (1958). Collected papers: Vols. 1–6 (C. Hartshorne & P. Weiss, Eds.). Cambridge, MA: Harvard University press. Scruton, R. (2019). Fools, frauds and firebrands. Thinkers of the new left . London, UK: Bloomsbury.

Plotnik, J. M., de Waal, F. B. M., Moore 3rd, D., & Reiss, D. (2010). Self-recognition in the Asian elephant and future directions for cognitive research with elephants in zoological settings. Zoo Biology, 29 (2),179–191.

Rudmin, F., Trimpop, R. M., Kryl, I., & Boski, P. (1987). Gustav Ichhieser in the history of social psychology: An early phenomenology of social attribution. British Journal of Social Psychology, 26 , 165–180.

Rzepa, T., & Dobroczyński, B. (2019). Historia polskiej myśli psychologicznej . Warszawa: PWN.

Sehon, S. (2005). Teleological realism . Cambridge, MA: MIT Press.

Sheen, F. J. (2019). The philosophy of science . Providence, RI: Cluny Media.

Smedslund, J. (2016). Why psychology cannot be an empirical science. Integrative Psychological and Behavioral Science, 50 (2), 185–195.

Snow, C. P. (1959). The two cultures . London, UK: Cambridge University Press.

Snyder, T. (2003). The reconstruction of nations. Poland, Ukraine, Lithuania, Belarus, 1569–1999 . New Have: Yale University Press.

Sullivan, J. W. N. (1933). The limitations of science. New York, NY: Viking Press.

Taylor, C. (2007). A secular age . Cambridge, MA: Harvard University Press.

Valsiner, J. (2012). A guided science: History of psychology in the mirror of its making . New Brunswick: Transaction Publishers.

Valsiner, J. (2017). From methodology to methods in human psychology . Cham, Switzerland: Springer.

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72 (3), 655–684.

Zagaria, A., Andò, A., & Zennaro, A. (2020). Psychology: A giant with feet of clay. Integrative Psychological and Behavioral Science.

Download references

There is no funding to report.

Author information

Authors and affiliations.

Jagiellonian University, Krakow, Poland

Lucas B Mazur

Sigmund Freud University, Berlin, Germany

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Lucas B Mazur .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Ethical Approval

Not applicable.

Informed Consent

Additional information, publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit .

Reprints and permissions

About this article

Mazur, L. B. (2020). Progress in psychological science: The importance of informed ignorance and curiosity-driven questions. Integrative Psychological and Behavioral Science, 54, 613–624.


A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 are giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business for distributed digital and AI innovation.

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.

Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use have led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. For most companies, customer service is a commodity capability, not part of the core business. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • data scientist:
  • prompt engineering
  • in-context learning
  • bias detection
  • pattern identification
  • reinforcement learning from human feedback
  • hyperparameter/large language model fine-tuning; transfer learning
  • data engineer:
  • data wrangling and data warehousing
  • data pipeline construction
  • multimodal processing
  • vector database management

The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli , showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and set standards for how APIs should function and be built. They developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).
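The gateway-plus-preapproved-APIs pattern described here can be sketched in a few lines. Everything below (the `ModelGateway` class, the `approve` and `call` methods, the toy `summarise` handler) is a hypothetical illustration, not Lilli's actual design:

```python
# Minimal sketch of a gateway that only exposes preapproved model endpoints.
# All names here are invented for illustration.

class ModelGateway:
    def __init__(self):
        self._approved = {}  # endpoint name -> vetted handler

    def approve(self, name, handler):
        """Register a handler under a stable endpoint name after review."""
        self._approved[name] = handler

    def call(self, name, payload):
        """Route a request, refusing endpoints the central team hasn't approved."""
        if name not in self._approved:
            raise PermissionError(f"endpoint '{name}' is not preapproved")
        return self._approved[name](payload)

gateway = ModelGateway()
gateway.approve("summarise", lambda text: text[:20] + "...")

print(gateway.call("summarise", "A long maintenance report about truck breakdowns"))
```

The design choice this illustrates: routing every call through one object gives a central team a single place to enforce approval, logging, and cost controls while product teams self-serve.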

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, which had been approved by both the security and legal teams, and made them available in a library for teams to use. More important is taking the time to identify and build those capabilities that are common across the most priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, they were able to generate a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
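The metadata tagging suggested above might look like the following sketch. The schema (`source`, `sensitivity`) and the `find`/`purge` helpers are hypothetical, but they show how tags support both discovery and, when necessary, removal of data before a model or index is rebuilt:

```python
# Hypothetical tagged store of unstructured documents.
documents = [
    {"id": 1, "text": "Q3 equipment failure summary",
     "tags": {"source": "maintenance", "sensitivity": "internal"}},
    {"id": 2, "text": "Retiree interview transcript",
     "tags": {"source": "hr", "sensitivity": "confidential"}},
    {"id": 3, "text": "Pump vibration troubleshooting notes",
     "tags": {"source": "maintenance", "sensitivity": "internal"}},
]

def find(docs, **tags):
    """Return documents whose tags match every given key/value pair."""
    return [d for d in docs if all(d["tags"].get(k) == v for k, v in tags.items())]

def purge(docs, **tags):
    """Drop matching documents, e.g. before rebuilding a retrieval index."""
    matched = find(docs, **tags)
    return [d for d in docs if d not in matched]

print([d["id"] for d in find(documents, source="maintenance")])           # [1, 3]
print([d["id"] for d in purge(documents, sensitivity="confidential")])    # [1, 3]
```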

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
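The insurer's consistency check can be approximated as follows: generate several phrasings of the same question, collect the model's answers, and accept only a sufficiently unanimous result. `ask_model` below is a stub standing in for a real LLM call, and the questions and answers are invented:

```python
from collections import Counter

VARIANTS = [
    "Is water damage covered?",
    "Does the policy cover water damage?",
    "Am I covered if a pipe bursts and floods the kitchen?",
]

# Fixed lookup table standing in for a real model.
FAKE_ANSWERS = {q: "yes" for q in VARIANTS}

def ask_model(question: str) -> str:
    return FAKE_ANSWERS[question]

def consistent_answer(variants, threshold=1.0):
    """Return the majority answer if its share meets the threshold, else None."""
    counts = Counter(ask_model(q) for q in variants)
    answer, n = counts.most_common(1)[0]
    return answer if n / len(variants) >= threshold else None

print(consistent_answer(VARIANTS))  # "yes": all variants agree
```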

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.
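A reusable prompt asset of the kind these standards target could be as simple as a parameterised template that teams fill in rather than rewriting from scratch. The template wording and field names here are invented for illustration:

```python
from string import Template

# A shared, reviewed prompt template; teams supply only the variable parts.
MAINTENANCE_PROMPT = Template(
    "You are assisting a $role. Using the context below, $task.\n"
    "Context:\n$context"
)

prompt = MAINTENANCE_PROMPT.substitute(
    role="field service operator",
    task="list the three most likely causes of the reported fault",
    context="Truck 14: engine overheats after 20 minutes at idle.",
)
print(prompt.splitlines()[0])
```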

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.


Chemical Society Reviews

Peptide-based self-assembled monolayers (SAMs): what peptides can do for SAMs and vice versa


a i3S – Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Rua Alfredo Allen, 208, Porto, Portugal E-mail: [email protected]

b INEB – Instituto de Engenharia Biomédica, Universidade do Porto, Rua Alfredo Allen, 208, Porto, Portugal

c ICBAS – Instituto de Ciências Biomédicas Abel Salazar, Universidade do Porto, Rua de Jorge Viterbo Ferreira, 4050-313 Porto, Portugal

Self-assembled monolayers (SAMs) represent highly ordered molecular materials with versatile biochemical features and multidisciplinary applications. Research on SAMs has made much progress since the early beginnings of Au substrates and alkanethiols, and numerous examples of peptide-displaying SAMs can be found in the literature. Peptides, presenting increasing structural complexity, stimuli-responsiveness, and biological relevance, represent versatile functional components in SAM-based platforms. This review examines the major findings and progress made on the use of peptide building blocks displayed as part of SAMs with specific functions, such as selective cell adhesion, migration and differentiation, biomolecular binding, advanced biosensing, molecular electronics, antimicrobial, osteointegrative and antifouling surfaces, among others. Peptide selection and design, functionalisation strategies, as well as structural and functional characteristics from selected examples are discussed. Additionally, advanced fabrication methods for dynamic peptide spatiotemporal presentation are presented, along with a number of characterisation techniques. Altogether, these features and approaches enable the preparation and use of increasingly complex peptide-based SAMs to mimic and study biological processes, and provide convergent platforms for high-throughput screening, discovery and validation of promising therapeutics and technologies.

Graphical abstract: Peptide-based self-assembled monolayers (SAMs): what peptides can do for SAMs and vice versa

  • This article is part of the themed collection: Celebrating the scientific accomplishments of RSC Fellows


C. Redondo-Gómez, P. Parreira, M. C. L. Martins and H. S. Azevedo, Chem. Soc. Rev., 2024, Advance Article, DOI: 10.1039/D3CS00921A



Evidence-Based Mental Health, 20(2), May 2017

What is the impact of a research publication?

An increasing number of metrics are used to measure the impact of research papers. Despite being the most commonly used, the 2-year impact factor is limited by a lack of generalisability and comparability, in part due to substantial variation within and between fields. Similar limitations apply to metrics such as citations per paper. New approaches compare a paper's citation count to others in the research area, while others measure social and traditional media impact. However, none of these measures take into account an individual author's contribution to the paper or the number of authors, which we argue are key limitations. The UK's 2014 Research Excellence Framework included a detailed bibliometric analysis comparing 15 selected metrics to a ‘gold standard’ evaluation of almost 150 000 papers by expert panels. We outline the main correlations between the most highly regarded papers by the expert panel in the Psychiatry, Clinical Psychology and Neurology unit and these metrics, most of which were weak to moderate. The strongest correlation was with the SCImago Journal Rank, a variant of the journal impact factor, while the amount of Twitter activity showed no correlation. We suggest that an aggregate measure combining journal metrics, field-standardised citation data and alternative metrics, including weighting or colour-coding of individual papers to account for author contribution, could provide more clarity.

A number of developments in the metrics field have occurred in recent years, and, in this perspective article, we discuss whether they can inform how judgements are made about the impact of research papers in psychiatry and beyond.

The best-known approach has been to rely on journal impact factors, the most common of which is the 2-year impact factor: the average number of citations received in a year by articles published in that journal over the previous 2 years. 1 Many arguments against journal impact factors have been outlined, including the skewed nature of citations in most journals, the variation between and within fields (with basic science attracting more citations) and research designs (with systematic reviews being relatively highly cited), and the citation lag time in some research fields being longer than 2 years. 2 A widely used alternative is the number of citations per paper, which can be drawn from research tools such as Scopus and Google Scholar, with the latter including a broader range of citable items such as online reports and theses. The problem with citation counts is that they vary considerably by research area, and there have been recent attempts to account for this. One of these is the new iCite tool, which normalises the number of citations of a particular paper to the median annual number of citations that NIH-funded papers in the field have received. 3 Finally, alternative metrics have been increasingly used and include tools such as Altmetric, which aims to capture the media and social media interest in a publication 4 and provides an overall article score and rankings compared with others in the same journal and/or time period.
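The field normalisation idea behind iCite can be illustrated with a simplified sketch. This is not the actual iCite algorithm (which computes a Relative Citation Ratio against NIH-funded benchmark papers); the benchmark data and function name below are invented for illustration:

```python
# Simplified sketch of field-normalised citation impact: compare a
# paper's citations per year to the median annual citation rate of a
# hypothetical set of benchmark papers from the same field.
from statistics import median

def field_normalised_score(paper_citations, paper_age_years,
                           field_annual_citation_counts):
    """Ratio of a paper's citations per year to the field's median
    citations per year. Values above 1 indicate above-median impact."""
    paper_rate = paper_citations / paper_age_years
    field_median_rate = median(field_annual_citation_counts)
    return paper_rate / field_median_rate

# Hypothetical example: a 4-year-old paper with 60 citations in a field
# whose papers typically attract about 5 citations per year.
print(field_normalised_score(60, 4, [3, 5, 5, 6, 8]))  # 3.0
```

The same raw citation count would yield a much lower normalised score in a field where 30 citations per year is typical, which is the point of normalising at all.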

A key problem with these approaches is that they do not account for an individual author's contribution to a paper, and therefore high citation rates, h-indexes (for individuals) and iCite scores can be achieved by researchers who have not made significant contributions to a research area. The best example of this is being included as a coauthor of a large treatment trial or genetic consortium, where a researcher's contribution may be mostly in relation to participant recruitment. Papers from the Psychiatric Genomics Consortium workgroup for schizophrenia now average over 280 authors, although the number of authors varies considerably and the author list is occasionally placed in the appendix. This highlights another problem with relying on measures of citation: they do not account for the total number of authors. Accordingly, h-indexes should be routinely provided for papers where the author is first, corresponding or last author (and second author in psychology).
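The restricted h-index suggested above can be sketched in a few lines. The paper records, roles and citation counts here are hypothetical:

```python
# Sketch of an h-index restricted to lead-author papers (first, last or
# corresponding), as proposed in the text.
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

def lead_author_h_index(papers):
    """h-index computed only over papers with a lead authorship role."""
    lead_roles = {"first", "last", "corresponding"}
    return h_index([p["citations"] for p in papers if p["role"] in lead_roles])

papers = [
    {"citations": 120, "role": "middle"},  # e.g. consortium membership
    {"citations": 40, "role": "first"},
    {"citations": 25, "role": "last"},
    {"citations": 2, "role": "corresponding"},
]
print(h_index([p["citations"] for p in papers]))  # 3: all papers counted
print(lead_author_h_index(papers))                # 2: consortium paper excluded
```

The gap between the two numbers is exactly the effect the authors describe: heavily cited consortium papers inflate the unrestricted index without reflecting lead contributions.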

Another limitation is that some of these metrics are subject to measurement error, and some can be gamed. To take an example of the latter, Altmetric scores can potentially be artificially inflated by bots that repost press releases. At the same time, they may not pick up all media activity if the article is not cited accurately or is embedded in a hyperlink. In addition, they are considerably higher in studies on exercise, diet and lifestyle, and in areas that attract controversy. 5 Although some research has shown some correlation between alternative metrics and citation scores, 6 particularly early after publication, they do not address the problems outlined above concerning the extent of an individual's contribution, normalisation by field and measurement error.

An important natural experiment has been undertaken in the UK, where a very large sample of papers (k=148 755) was assessed against a gold standard of peer review as part of the 2014 Research Excellence Framework (REF 2014). The REF was a national exercise undertaken to assess research from 2008 to 2013 in higher education institutions in the UK, succeeding an earlier process (the 2008 Research Assessment Exercise). It determined the extent of central government basic research funding for these institutions until the time of the next evaluation (thought to be in 2021). Three factors were considered: outputs (which made up 65% of the overall quality profile), impact (20%) and environment (15%). A detailed bibliometric analysis of the output data was published and provides a breakdown by unit of assessment. 7 Each eligible academic typically submitted four outputs for the REF. Here, we discuss the assessment block that most departments of psychiatry and psychology will have entered, namely Unit 4 (Psychiatry, Clinical Psychology and Neurology), which assessed 9086 journal articles. The analysis took 15 different metrics for each paper and analysed the extent to which they were correlated with the final view of the REF panel (an expert committee of 39 researchers). The strongest bivariate correlations between these metrics and achieving the highest score per paper are presented in figure 1. Each paper was measured against a standard of originality, significance and rigour; the best papers were scored 4*, representing ‘world leading’, whereas 3* reflected ‘internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence’. Lower scores of 2*, 1* and unclassified were also given.

Figure 1. Strongest bivariate correlations between paper metrics and the highest REF score. *Source-Normalised Impact per Paper; †field-weighted citation impact; ‡Google Scholar citations.
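The analysis described above reduces, at its core, to computing bivariate correlations between each metric and the panel's quality scores. A minimal sketch of that computation follows; the metric values and panel scores are invented for illustration and are not REF data:

```python
# Pearson (bivariate) correlation between a paper-level metric and
# expert panel scores, implemented from the textbook definition.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical journal-rank values and REF-style 1*-4* panel scores.
metric_values = [2.1, 0.8, 3.5, 1.2, 2.9, 0.5]
panel_scores = [3, 2, 4, 2, 4, 1]

print(round(pearson(metric_values, panel_scores), 2))  # 0.97
```

The REF exercise ran this kind of comparison for 15 metrics over 9086 Unit 4 articles, which is what makes its weak-to-moderate correlations informative.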

The strongest correlation was with the SCImago Journal Rank, a metric based on the notion that not all citations have equivalent weight, which categorises journals per field into four categories (from low to high rank). It assumes that the subject field, quality and reputation of the journal have a direct effect on the value of a citation. This was followed by the absolute number of Scopus citations and the percentile of highly cited publications. The Source-Normalised Impact per Paper attempts to relativise citation impact by weighting citations based on the total number of citations in a subject field. Thus, a single citation is given more value in subject fields where citations are less likely.

Notably, there was no correlation between Twitter activity generated by an article and a top REF score (4*), and associations were weak for full-article requests, downloads and reads on one platform (Mendeley). Interestingly, for the overall REF, which included 36 units, the three strongest markers of quality were the SCImago Journal Rank, the Source-Normalised Impact per Paper and the percentile of highly cited publications. Correlations tended to be stronger in the sciences than in the arts and humanities.

What this suggests is that the judgement of an expert panel constituted to examine a paper's impact, perhaps the closest to a gold standard that is possible, was most strongly correlated with the SCImago Journal Rank, which is itself based largely on the journal impact factor. This is not surprising: many such journals have more stringent peer and statistical review, insist on adherence to research guidelines and benefit from professional editors, and articles in high-impact journals are often cited, for example in introductions, to lend legitimacy to a particular field of study. Further, the analysis of REF 2014 suggests that new metrics are unlikely to replace simpler ones such as the journal impact factor and the number of citations per year.

So where does this leave someone trying to assess the impact of a paper? As there are difficulties with relying on one metric, we suggest that a combination of metrics should be used. We recommend that those most correlated with expert judgement take priority but can see a role for Altmetrics, with the caveats noted above, as a measure of wider public engagement, impact and interest. In the future, a combined score that takes into account journal impact factor, number of citations, iCite, Altmetric scores and a different colour coding or weighting for those papers where authors have made a substantial contribution (eg, where an author has been first/last/corresponding) would assist in providing some clarity.
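An aggregate measure of the kind proposed above might be sketched as follows. The metric names, weights and contribution boost are illustrative assumptions on our part, not values established in the literature:

```python
# Sketch of a combined impact score: a weighted sum of pre-normalised
# metrics, up-weighted when the author made a substantial contribution
# (first, last or corresponding authorship).
def combined_score(metrics, weights, substantial_contribution,
                   contribution_boost=1.5):
    """Weighted sum of normalised metric values, boosted for papers
    where the author made a substantial contribution."""
    base = sum(weights[name] * value for name, value in metrics.items())
    return base * contribution_boost if substantial_contribution else base

# Hypothetical metric values, each pre-normalised to a 0-1 scale.
metrics = {"journal_impact": 0.8, "citations_per_year": 0.6,
           "icite": 0.7, "altmetric": 0.3}
# Hypothetical weights prioritising the metrics most correlated with
# expert judgement, with Altmetric given the smallest weight.
weights = {"journal_impact": 0.4, "citations_per_year": 0.3,
           "icite": 0.2, "altmetric": 0.1}

print(round(combined_score(metrics, weights, True), 3))   # 1.005
print(round(combined_score(metrics, weights, False), 3))  # 0.67
```

A colour-coding scheme, as the authors suggest, could replace the numeric boost: the same base score displayed differently depending on the author's role on the paper.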


SF and AW are funded by the Wellcome Trust.

Twitter: Follow Seena Fazel @seenafazel

Competing interests: None declared.

Provenance and peer review: Not commissioned; internally peer reviewed.


This article is part of the research topic.

(De)Politicizing Climate and Environmental Politics in Times of Crises: Contexts, Strategies and Effects

Now they can cope? The Green Deal and the contested meaning of sustainability in EU sectoral governance (Provisionally Accepted)

  • 1 Osnabrück University, Germany

The final, formatted version of the article will be published soon.

The European Union (EU) has long discursively positioned itself as a global frontrunner for sustainability and climate protection. Nevertheless, substantive progress towards sustainability goals has not been reached in several governance areas, such as transport and mobility. Especially at the local scale, the highly complex and technocratic EU policy framework is confronted with increasingly polarized claim-making regarding ecological, social and economic problems. With its recent Green Deal governance architecture, the European Commission has sought to address this ideational and institutional fragmentation and the resulting stalemate towards reaching "climate neutrality" by proposing ambitious sectoral policies and new governance instruments. This problem-driven paper exploratively investigates the ongoing reconfigurations the Green Deal induces within EU governance. Using the example of the urban mobility sector and employing an interpretive analysis of key policy documents and expert/stakeholder interviews, the paper links the literatures on EU governance architectures and norm dynamics. It discusses potentials and pitfalls for meaning-making processes in times of the socio-ecological polycrisis. Notably, it critically evaluates the Green Deal's capacity to open and sustain spaces for translating sustainability across horizontally and vertically fragmented realms of EU governance.

Keywords: Green deal, Transport & Mobility Policy, sustainability, EU Governance, governance architectures

Received: 16 Dec 2023; Accepted: 13 Mar 2024.

Copyright: © 2024 Stockmann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Nils Stockmann, Osnabrück University, Osnabrück, Germany
