
CASAS Testing Accommodations and Accessibility

New to testing accommodations? Download the step-by-step guide (see Testing Accommodations Step-by-Step in the downloads list below).

CASAS is committed to creating a standardized testing program inclusive of all learners. Assessment accommodations are part of the overarching goal of achieving fairness in testing. Accommodations provide examinees who have disabilities with an opportunity to demonstrate their skills and abilities in educational assessments. Ultimately, the objective is to eliminate potential barriers to the measurement of the intended construct for those who require accommodations (AERA et al., 2014). Assessment accommodations target a need associated with a specific disability by modifying how test content is presented or modifying the tools a test taker uses to navigate the testing environment and respond to test questions (International Test Commission and Association of Test Publishers, 2022). Standard provisions should document how accommodations will be implemented and monitored (AERA et al., 2014). These adjustments allow learners to demonstrate their true ability level on standardized tests without changing what a test is intended to measure or how scores are interpreted. Any accommodation must be available across pre- and post-testing to ensure that interpretations of learner performance on each assessment are comparable. It is important to note that not all learners with disabilities will need testing accommodations. CASAS consults with the American Printing House for the Blind (APH, https://www.aph.org/) and the National Center for Accessible Media (NCAM, https://www.wgbh.org/foundation/what-we-do/ncam) to ensure that the accommodations available are responsive to the needs of the widest range of students.

Legislation Related to Accommodations

The accountability standards in the 2014 Workforce Innovation and Opportunity Act (WIOA) include the Rehabilitation Act Amendments of 1998. WIOA, effective July 2015, focuses on learners most in need, such as those with low literacy skills, English language learners, and those who have disabilities. Other legislation addressing testing accommodations includes the ADA Amendments Act of 2008, Sections 504 (equal opportunity) and 508 (comparable access to and use of electronic information technology) in the Rehabilitation Act Amendments, and the Individuals with Disabilities Education Improvement Act of 2004.

Local Agency Responsibility

CASAS describes standard conditions under which tests should be administered to a wide range of students within the target population. This information is in the Test Administration Manuals (TAMs) for each test series. Decisions about changes to these standard conditions, such as how students access, interact with, and respond to test content, will be made by appropriately trained individuals at the local agency or district level. The following guidelines provide support in making those decisions.

Local agencies are responsible for providing fully accessible services and reasonable accommodations for learners with documented disabilities. Adult learners with disabilities are responsible for requesting accommodations and for submitting documentation of their disability at the time of registration, program entry, or after diagnosis. Official records such as the Individual Education Plan (IEP) document the need for accommodations. The documentation will show that the disability interferes with learners’ abilities to demonstrate their skills on assessments. Information detailing a disability and accommodation sometimes also comes from a doctor’s report, a diagnostic assessment from a certified professional, or other clinical records. If no documentation is available, adult education agencies can often contact the local division of vocational rehabilitation or the learner’s secondary school to request documentation of a disability.

Testing Accommodations

For learners who have documented disabilities, appropriately trained local assessment staff may provide accommodations in test administration procedures based on student documentation.

Examples of testing accommodations for CASAS assessments:

  • Read aloud, sign, or translate test directions word-for-word
  • Read aloud or sign the test display, questions, and answer choices, except when taking a reading comprehension test, where doing so would interfere with the construct being measured
  • Use a scribe
  • Use an adaptive input device to respond to the test
  • Use a magnifier for paper-based tests
  • Use large-print paper tests and answer sheets
  • Use a reading tracker/highlighter tool
  • Use screen reader assistive technology and tactile graphics test booklets where appropriate
  • Use a talking calculator for math tests
  • Allow breaks while testing
  • Extend test-taking time
  • Allow flexible test scheduling
  • Provide a distraction-free testing space

Accessibility

Although the term “accessibility” was once associated with test accommodations for students who have disabilities, today it is a consideration for all learners and includes universal test design and any accessibility supports available to all test takers (International Test Commission and Association of Test Publishers, 2022). Accessibility is a part of every phase of the test development process at CASAS. This ensures assessments are inclusive and do not include features that would prevent any test taker from demonstrating their true ability in any modality. CASAS consults with the American Printing House for the Blind (APH) and the National Center for Accessible Media (NCAM) to ensure all assessments are accessible.

Universal Test Design

CASAS adheres to Universal Test Design (UTD) principles in item and test development, with the guiding principle to develop tests accessible to the widest range of students within the target population. This practice aims to ensure valid test results and interpretations. Universal test design principles integral to the CASAS item and test development process include (Universal Design of Assessments, n.d.):

  • Inclusive assessment population
  • Precisely defined constructs
  • Accessible, non-biased items
  • Amenable to accommodations
  • Simple, clear, and intuitive instructions and procedures
  • Maximum readability and comprehensibility
  • Maximum legibility

Accessibility Features in CASAS eTests Online

The following features are available to all test takers using CASAS eTests Online:

  • Preset text sizes (default, large, very large) of question stems and answer sets, toolbar, navigation features and directions
  • Preset background and text color combinations
  • Text and image magnification tool
  • Customizable volume setting for listening tests
  • Touch screen compatibility
  • Keyboard accessible (tab, enter, etc.)
  • Content masking (toolbar, including timer)

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. Washington, DC.

International Test Commission and Association of Test Publishers. (2022). Guidelines for technology-based assessment. https://www.intestcom.org/page/28 and https://www.testpublishers.org/white-papers

Universal Design of Assessments. (n.d.). National Center on Educational Outcomes (NCEO). https://nceo.info/Assessments/universal_design/

Downloads:

  • Screen Reader Proctor Script for Reading STEPS Math GOALS 2 Test Sessions in CASAS eTests (PDF, 157.66 KB)
  • CASAS Testing Accommodations and Accessibility Features FAQs (PDF, 298.00 KB)
  • CASAS Assessment Accommodations and Accessibility Guidelines (PDF, 299.95 KB)
  • How To Guide: CASAS eTests with Screen Reader, Reading STEPS and Math GOALS 2 (PDF, 298.59 KB)
  • How To Guide: CASAS eTests for Visual Impairment, Listening STEPS (PDF, 232.76 KB)
  • Screen Reader Proctor Script for GOALS and Life and Work Test Sessions in CASAS eTests (PDF, 205.18 KB)
  • Testing Accommodations Step-by-Step (PDF, 445.00 KB)
  • For more information, see Special Needs products.
  • For more information, see Accommodations Materials for Visual Impairment/Blindness. These materials can also be used to provide accommodations for test takers with other documented disabilities who require a human reader or use of a screen reader during assessments.


What are testing accommodations?

Testing is part and parcel of a student’s education journey, and students with disabilities also sit both formative and high-stakes tests. Tests are periodically given to all students to measure their achievement and understanding. For students with disabilities, special accommodations can be requested to make testing a more level playing field. According to the Americans with Disabilities Act (ADA), testing accommodations are changes to the regular testing environment that allow individuals with disabilities to demonstrate their true aptitude or achievement level on standardized exams or other high-stakes tests. Individual accommodations are determined for each student by his or her IEP team.

In this section:

  • What types of testing accommodations are available?
  • What testing accommodations are covered by ADA requirements?
  • Assistive technology and testing accommodations
  • Webinar: How to Support Students on the State Assessment

An accommodation in testing means that one or more aspects of the testing conditions have been altered so that a student with a disability can fully demonstrate their mastery in any given subject. Testing accommodations generally mirror any accommodations made for the student in the normal classroom environment. They fall under four types of accommodations: timing and scheduling, setting, presentation, and response. Let’s take a look at each of these in a little more detail.

Timing and scheduling accommodations

Changes to timing mean that we’re increasing the time allocated to complete a test or assignment. If we were to make a change to the testing schedule, it might mean changing the date or time of the test to better accommodate our students.

Setting accommodations

If a student has a setting accommodation, we’d need to take a look at the location of the test. Where would be better suited for the student to take the test? Or is there something that needs to change about the conditions of the test? For example, would noise canceling headphones be appropriate?

Response accommodations

Making an accommodation to the way that a student responds to testing involves how they complete activities or test answers. This might involve a scribe or some kind of assistive device.

Presentation Accommodations

Making a change to how students access test questions and information falls under presentation accommodations. Changes include giving them access to testing materials without having to rely on standard print.

The ADA outlines a wide range of testing accommodations that might be required by students in special education during formative assessment or high-stakes tests.

Presentation accommodations

  • Large print
  • Magnification devices
  • Sign language
  • Braille
  • Tactile graphics
  • Human readers
  • Audiotape or CD
  • Audio amplification devices
  • Screen readers

Setting accommodations

  • Reduce distractions to the student
  • Reduce distractions to other students
  • Change setting for accessibility
  • Change setting to allow the use of special equipment
  • Permission to bring and take medications during the exam (for example, for students with diabetes who need to monitor their blood sugar and use insulin)

Response accommodations

  • Human scribe
  • Computer
  • Tape recorder
  • Calculators
  • Spelling and grammar aids

Timing and scheduling accommodations

  • Extended time
  • Frequent breaks
  • Change of schedule or order of activities

Using assistive technology as part of your testing accommodations

Students in special education with documented disabilities must be allowed to use assistive technology (AT) for testing, just as they would in the classroom every day. A student’s IEP team usually recommends assistive technology suitable for the student. Parents and teachers can also request AT for a student.

AT used during testing makes sure that every student can show their knowledge and mastery to the best of their abilities. AT is a means of levelling the playing field for students in special education. Testing scenarios where students can use text-to-speech or speech-to-text technology give students the opportunity to sit a test independently, without relying on a teacher or other human intervention.

What technology can be used in testing?

Any tool that a student already uses in a classroom environment may be approved for use during both classroom assessments and in high-stakes testing.

The most common AT accommodations are:

  • Tape recorder
  • Audio Amplification
  • Text-to-Speech software
  • Speech-to-text technology
  • Screen magnifier
  • Screen overlay
  • Word prediction technology


Testing Accommodations: How to Support Students on the State Assessment

Testing accommodations can pose more questions than answers, including: how do accommodations work with online state assessments, which students should get accommodations, and which tools are available or embedded?

In this on-demand webinar, Ruth Ziolkowski, Senior Vice President of Public Policy and Partnerships for Don Johnston and Texthelp, will guide you to:

  • Understand how various accommodations work in conjunction with state tests
  • Know where to go to get answers to your state-specific questions
  • Screen students to determine if they would benefit from a read-aloud or speech-to-text accommodation
  • Advocate for appropriate accommodations for students with disabilities


Testing Accommodations—5 ways to help students “show what they know”

by Mary Pembleton


When assessment season rolls around, it brings with it many challenges. Not least among them is managing the logistics of testing accommodations for students with disabilities.

Staff who are responsible for implementing testing accommodations must often contend with complex processes for accessing accommodations.

These processes can be stressful for students and often add to their cognitive load as they navigate systems that are dissimilar to the day-to-day accommodations they typically use. And it can be tough to know exactly how to improve student accessibility during testing.

Paul Auger, Assistive Technology Specialist at The Public Schools of Brookline in Brookline, MA, recalls issues with past processes.

If a student had a word prediction accommodation, Brookline staff would set up two separate computers: one with the assessment itself, and another where the student could type answers to test questions into a Google Doc using their word prediction.

Once they had finished answering a question, an administrator had to transfer the student’s answer into the test by typing it verbatim.

“This was hard for students with tracking issues because they had to move from one computer to get the question to the other computer to answer, and they would lose track of the question. It was a mess. And it required resources: we had to find two computers and a space for testing.”

But Brookline is able to do things differently now.

Read on to discover how.

What are testing accommodations?

Simply put, testing accommodations are measures put in place to ensure a student can accurately demonstrate their knowledge of what they are being tested on. They are meant to address barriers to a student’s ability to show what they know.

For example, a student with dyslexia who is being tested on reading comprehension may use text-to-speech to accommodate fluency and decoding challenges.

This is because the objective isn’t to test their ability to decode (which may be hindered by dyslexia), but their ability to understand the reading material provided. So, in this case, listening to passages read aloud helps isolate the skill being tested (reading comprehension) so fluency or decoding doesn’t adversely affect the testing construct.

Extra time, speech-to-text, word prediction, and screen-reading technology are common accommodations that support learners with disabilities during testing.

It’s super encouraging that, in general, access to testing accommodations is continuously improving. State standardized tests are now digital, for example, and most have many accommodations embedded into the test, including read-aloud capabilities.

Other types of embedded accommodations are also gaining traction. For example, a recent policy change in Massachusetts means that school districts like Brookline no longer have to supply two separate devices to students with word prediction accommodations. And that means students with disabilities can take their assessments alongside their peers, and staff no longer have to manually transfer a student’s answers into the test.

This is because in Massachusetts and an increasing number of other states, students who qualify for a word-prediction accommodation can use Co:Writer’s word-prediction embedded right into the test, within kiosk mode.

“Now that it’s changed I love it. In Massachusetts, the only thing we need to do to give students access to their Co:Writer accommodations is make sure they click on the link.”

Interoperable, embedded word prediction is a free service offered by Don Johnston Incorporated (through Co:Writer) and Texthelp (through Read&Write).


Want to help make the testing accommodation experience easier and more accessible for your students and staff? Consider trying the following:

1. Be a mythbuster.

Unfortunately, the myth that accommodations are cheating or offer unfair advantages is a persistent one.

But accommodations for students who need them are as essential to their educational experience as glasses are to a visually impaired student.

Glasses allow visually impaired learners to read and write, see the board, and participate fully in class.

It’s a similar situation with assistive technology: students with learning differences that interfere with writing, like dysgraphia, often use specialized word prediction tools like Co:Writer to complete assignments.

Dysgraphia impacts not only handwriting, but processes in the brain that allow people to translate their thoughts into writing. Specialized word prediction technology helps connect their thoughts to the page.

For students who rely on accommodations in their daily life, it’s super important that they also have access to the same tools on their assessment.

For students with learning disabilities that impact reading like dyslexia, text-to-speech tools like Snap&Read allow them to access curriculum content. Data gathered from the accommodations screening tool uPAR shows that 60% of students reading below grade level can comprehend at or above grade level with an accommodation.


Helping spread the word in your school and greater community can help bust some of the misconceptions and stigmas associated with accommodations.

Jan McSorley, VP of Accessibility at Pearson’s Psychometrics and Testing Services division, agrees:

“I think that there’s still a need for awareness raising about disabilities and evaluating assistive technologies, and about how people who rely on assistive technologies learn and complete their work.”

You can even start by sharing this article on your social media.

2. Consider physical needs and the environment.


Addressing physical needs first and foremost sets learners up for a successful test-taking experience and ensures that they can do their best work with their accommodations.

Reducing noise distractions, thinking through all of the tools and equipment that support each student on a daily basis, and making sure students aren’t too hot or too cold can all help.

For example, Jan McSorley recalls a situation from her former work as an assistive technology specialist where a student’s physical need was accidentally missed:

“I was supporting a student who had paralysis on the left side of his body. And on a normal school day, he had a specialized chair that helped keep him balanced so he could concentrate on his schoolwork.”

“On the day of testing, I went in to observe and he didn’t have his chair. He was bubbling in answers without reading the question because all of his effort was going into trying to pull himself up in the chair.”

“It was just a simple oversight, and it wasn’t done out of malice. It was only that nobody really thought through the fact that we need to make sure he’s got exactly the same setup that he has instructionally so that we can measure his knowledge.”

This example can point back to the importance of maintaining other accommodations that students rely on in their daily lives during testing, such as assistive technology.

3. Establish continuity of tools.

What technology tools do your individual students use on a daily basis? Is there a way to make those same tools available on the assessment, within your specific state’s requirements? If a student is familiar with a specific technology, is there a way to provide that same technology on the test?

For example, both Co:Writer and Read&Write offer word prediction in Pearson’s TestNav and other state assessments, in states where policy allows it. Features are automatically limited to conform to state policies.

Discover how Co:Writer’s specialized word prediction helps struggling writers get the right words onto the page.

Choosing the option that students are already familiar with will better support learners in demonstrating their knowledge.

Jan McSorley says:

“If we don’t have a setup that students are experiencing instructionally we’re not going to measure their knowledge. We’re going to measure their ability to figure out a new system. We’re going to measure their ability to work around their disability, and we’re not going to get a true measure of what they know.”

4. Share stories with policymakers.

Jan notes that sharing stories about students with disabilities with policymakers is helpful. Unless someone has firsthand experience, it’s impossible to fully understand the difference various accommodations can make.

“I think it’s super important that special education teachers feel empowered to tell stories about their students. Because without those examples, presented in a very professional, legal, and factual manner, it’s difficult for policymakers to really understand the impact of what they’re putting into place.”

“Unintended bias is never an intentional thing—it tends to come from lack of exposure to the use case.”

5. Practice test taking with accommodations in place.

Practice tests help students know what to expect and practice using their accommodations in a situation that may not be familiar to them.


“Massachusetts has online practice tests where students can practice using their accommodations, and I’m able to sit with the kids and really reinforce test taking skills.”

Your students may benefit from doing the same.

Embedded accommodations make for a simpler test-taking experience, which also means test scores more accurately reflect student knowledge. With most students using one-to-one Chromebooks during assessments, many accommodations are now built-in, including text-to-speech.

Test-taking with accommodations in hand better represents how students use accommodations to access learning throughout the year (we don’t take students’ glasses away when taking a test!).

Now more states are adding accommodations: in 2022, Illinois became the newest state to allow embedded word prediction, joining Massachusetts, Rhode Island, and New Mexico in streamlining writing accommodations for test-takers.

Paul from Brookline told us that using embedded word prediction made things easier for students and personnel alike.

While we’re moving ever closer to an environment where embedded accommodations are standard for learners with disabilities, it takes all of our voices to work on getting us there.

Still have questions about testing accommodations?

Join Ruth Ziolkowski, OTR, for this engaging on-demand webinar for more information.




U.S. Department of Justice Civil Rights Division


ADA Requirements: Testing Accommodations

Last updated: February 28, 2020


Standardized examinations and other high-stakes tests are gateways to educational and employment opportunities. Whether seeking admission to a high school, college, or graduate program, or attempting to obtain a professional license or certification for a trade, it is difficult to achieve such goals without sitting for some kind of standardized exam or high-stakes test. While many testing entities have made efforts to ensure equal opportunity for individuals with disabilities, the Department continues to receive questions and complaints relating to excessive and burdensome documentation demands, failures to provide needed testing accommodations, and failures to respond to requests for testing accommodations in a timely manner.

The Americans with Disabilities Act (ADA) ensures that individuals with disabilities have the opportunity to fairly compete for and pursue such opportunities by requiring testing entities to offer exams in a manner accessible to persons with disabilities. When needed testing accommodations are provided, test-takers can demonstrate their true aptitude.

The Department of Justice (Department) published revised final regulations implementing the ADA for title II (State and local government services) and title III (public accommodations and commercial facilities) on September 15, 2010. These rules clarify and refine issues that have arisen over the past 20 years and contain new and updated requirements.

This publication provides technical assistance on testing accommodations for individuals with disabilities who take standardized exams and other high-stakes tests. It addresses the obligations of testing entities, which include private, state, or local government entities that offer exams related to applications, licensing, certification, or credentialing for secondary (high school), postsecondary (college and graduate school), professional (law, medicine, etc.), or trade (cosmetology, electrician, etc.) purposes. Who is entitled to testing accommodations, what types of testing accommodations must be provided, and what documentation may be required of the person requesting testing accommodations are also discussed.

What Kinds Of Tests Are Covered?

Exams administered by any private, state, or local government entity related to applications, licensing, certification, or credentialing for secondary or postsecondary education, professional, or trade purposes are covered by the ADA, and testing accommodations, pursuant to the ADA, must be provided.[1]

Examples of covered exams include:

  • High school equivalency exams (such as the GED);
  • High school entrance exams (such as the SSAT or ISEE);
  • College entrance exams (such as the SAT or ACT);
  • Exams for admission to professional schools (such as the LSAT or MCAT);
  • Admissions exams for graduate schools (such as the GRE or GMAT); and
  • Licensing exams for trade purposes (such as cosmetology) or professional purposes (such as bar exams or medical licensing exams, including clinical assessments).

What Are Testing Accommodations?

Testing accommodations are changes to the regular testing environment and auxiliary aids and services[2] that allow individuals with disabilities to demonstrate their true aptitude or achievement level on standardized exams or other high-stakes tests.

Examples of the wide range of testing accommodations that may be required include:

  • Braille or large-print exam booklets;
  • Screen reading technology;
  • Scribes to transfer answers to Scantron bubble sheets or record dictated notes and essays;
  • Extended time;
  • Wheelchair-accessible testing stations;
  • Distraction-free rooms;
  • Physical prompts (such as for individuals with hearing impairments); and
  • Permission to bring and take medications during the exam (for example, for individuals with diabetes who must monitor their blood sugar and administer insulin).

Who Is Eligible To Receive Testing Accommodations?

Individuals with disabilities are eligible to receive necessary testing accommodations. Under the ADA, an individual with a disability is a person who has a physical or mental impairment that substantially limits a major life activity (such as seeing, hearing, learning, reading, concentrating, or thinking) or a major bodily function (such as the neurological, endocrine, or digestive system). The determination of whether an individual has a disability generally should not demand extensive analysis and must be made without regard to any positive effects of measures such as medication, medical supplies or equipment, low-vision devices (other than ordinary eyeglasses or contact lenses), prosthetics, hearing aids and cochlear implants, or mobility devices. However, negative effects, such as side effects of medication or burdens associated with following a particular treatment regimen, may be considered when determining whether an individual’s impairment substantially limits a major life activity.

A substantial limitation of a major life activity may be based on the extent to which the impairment affects the condition, manner, or duration in which the individual performs the major life activity. To be “substantially limited” in a major life activity does not require that the person be unable to perform the activity. In determining whether an individual is substantially limited in a major life activity, it may be useful to consider, when compared to most people in the general population, the conditions under which the individual performs the activity or the manner in which the activity is performed. It may also be useful to consider the length of time an individual can perform a major life activity or the length of time it takes an individual to perform a major life activity, as compared to most people in the general population. For example:

  • The condition or manner under which an individual who has had a hand amputated performs manual tasks may be more cumbersome, or require more effort or time, than the way most people in the general population would perform the same tasks.
  • The condition or manner under which someone with coronary artery disease performs the major life activity of walking would be substantially limited if the individual experiences shortness of breath and fatigue when walking distances that most people could walk without experiencing such effects.
  • A person whose back or leg impairment precludes him or her from sitting for more than two hours without significant pain would be substantially limited in sitting, because most people can sit for more than two hours without significant pain.

A person with a history of academic success may still be a person with a disability who is entitled to testing accommodations under the ADA. A history of academic success does not mean that a person does not have a disability that requires testing accommodations. For example, someone with a learning disability may achieve a high level of academic success, but may nevertheless be substantially limited in one or more of the major life activities of reading, writing, speaking, or learning, because of the additional time or effort he or she must spend to read, write, speak, or learn compared to most people in the general population.

What Testing Accommodations Must Be Provided?

Testing entities must ensure that the test scores of individuals with disabilities accurately reflect the individual’s aptitude or achievement level or whatever skill the exam or test is intended to measure. A testing entity must administer its exam so that it accurately reflects an individual’s aptitude, achievement level, or the skill that the exam purports to measure, rather than the individual’s impairment (except where the impaired skill is one the exam purports to measure).[3]

  • Example: An individual may be entitled to the use of a basic calculator during exams as a testing accommodation. If the objective of the test is to measure one’s ability to solve algebra equations, for example, and the ability to perform basic math computations (e.g., addition, subtraction, multiplication, and division), is secondary to the objective of the test, then a basic calculator may be an appropriate testing accommodation. If, however, the objective of the test is to measure the individual’s understanding of, and ability to perform, math computations, then it likely would not be appropriate to permit a calculator as a testing accommodation.

What Kind Of Documentation Is Sufficient To Support A Request For Testing Accommodations?

All testing entities must adhere to the following principles regarding what may and may not be required when a person with a disability requests a testing accommodation.

  • Documentation. Any documentation required by a testing entity in support of a request for testing accommodations must be reasonable and limited to the need for the requested testing accommodations. Requests for supporting documentation should be narrowly tailored to the information needed to determine the nature of the candidate’s disability and his or her need for the requested testing accommodation. Appropriate documentation will vary depending on the nature of the disability and the specific testing accommodation requested.

Examples of types of documentation include:

  • Recommendations of qualified professionals;
  • Proof of past testing accommodations;
  • Observations by educators;
  • Results of psycho-educational or other professional evaluations;
  • An applicant’s history of diagnosis; and
  • An applicant’s statement of his or her history regarding testing accommodations.

Depending on the particular testing accommodation request and the nature of the disability, however, a testing entity may only need one or two of the above documents to determine the nature of the candidate’s disability and his or her need for the requested testing accommodation. If so, a testing entity should generally limit its request for documentation to those one or two items and should generally evaluate the testing accommodation request based on those limited documents without requiring further documentation.

  • Past Testing Accommodations. Proof of past testing accommodations in similar test settings is generally sufficient to support a request for the same testing accommodations for a current standardized exam or other high-stakes test.

Past Testing Accommodations on Similar Standardized Exams or High-Stakes Tests. If a candidate requests the same testing accommodations he or she previously received on a similar standardized exam or high-stakes test, provides proof of having received the previous testing accommodations, and certifies his or her current need for the testing accommodations due to disability, then a testing entity should generally grant the same testing accommodations for the current standardized exam or high-stakes test without requesting further documentation from the candidate. So, for example, a person with a disability who receives a testing accommodation to sit for the SAT should generally get the same testing accommodation to take the GRE, LSAT, or MCAT.

Formal Public School Accommodations. If a candidate previously received testing accommodations under an Individualized Education Program (IEP)[4] or a Section 504 Plan,[5] he or she should generally receive the same testing accommodations for a current standardized exam or high-stakes test. If a candidate shows the receipt of testing accommodations in his or her most recent IEP or Section 504 Plan, and certifies his or her current need for the testing accommodations due to disability, then a testing entity should generally grant those same testing accommodations for the current standardized exam or high-stakes test without requesting further documentation from the candidate. This would include students with disabilities publicly-placed and funded in a private school under the IDEA or Section 504 placement procedures whose IEP or Section 504 Plan addresses needed testing accommodations.

Example. Where a student with a Section 504 Plan in place since middle school that includes the testing accommodations of extended time and a quiet room is seeking those same testing accommodations for a high-stakes test, and certifies that he or she still needs those testing accommodations, the testing entity receiving such documentation should generally grant the request.

Private School Testing Accommodations. If a candidate received testing accommodations in private school for similar tests under a formal policy, he or she should generally receive the same testing accommodations for a current standardized exam or high-stakes test. Testing accommodations are generally provided to a parentally-placed private school student with disabilities pursuant to a formal policy and are documented for that particular student. If a candidate shows a consistent history of having received testing accommodations for similar tests, and certifies his or her current need for the testing accommodations due to disability, then a testing entity should generally grant those same testing accommodations for the current standardized exam or high-stakes test without requesting further documentation from the candidate.

Example. A private school student received a large-print test and a scribe as testing accommodations on similar tests throughout high school pursuant to a formal, documented accommodation policy and plan. Where the student provides documentation of receiving these testing accommodations, and certifies that he or she still needs the testing accommodations due to disability, a testing entity should generally grant the candidate’s request for the same testing accommodations without requesting further documentation.

First Time Requests or Informal Classroom Testing Accommodations. An absence of previous formal testing accommodations does not preclude a candidate from receiving testing accommodations. Candidates who are individuals with disabilities and have never previously received testing accommodations may also be entitled to receive them for a current standardized exam or high-stakes test. In the absence of documentation of prior testing accommodations, testing entities should consider the entirety of a candidate’s history, including informal testing accommodations, to determine whether that history indicates a current need for testing accommodations.

Example. A high school senior is in a car accident that results in a severe concussion. The report from the treating specialist says that the student has post-concussion syndrome that may take up to a year to resolve, and that while his brain is healing he will need extended time and a quiet room when taking exams. Although the student has never previously received testing accommodations, he may nevertheless be entitled to the requested testing accommodations for standardized exams and high-stakes tests as long as the post-concussion syndrome persists.

Example. A student with a diagnosis of ADHD and an anxiety disorder received informal, undocumented testing accommodations throughout high school, including time to complete tests after school or at lunchtime. In support of a request for extended time on a standardized exam, the student provides documentation of her diagnoses and their effects on test-taking in the form of a doctor’s letter; a statement explaining her history of informal classroom accommodations for the stated disabilities; and certifies that she still needs extended time due to her disabilities. Although the student has never previously received testing accommodations through an IEP, Section 504 Plan, or a formal private school policy, she may nevertheless be entitled to extended time for the standardized exam.

Qualified Professionals. Testing entities should defer to documentation from a qualified professional who has made an individualized assessment of the candidate that supports the need for the requested testing accommodations. Qualified professionals are licensed or otherwise properly credentialed and possess expertise in the disability for which modifications or accommodations are sought. Candidates who submit documentation (such as reports, evaluations, or letters) that is based on careful consideration of the candidate by a qualified professional should not be required by testing entities to submit additional documentation. A testing entity should generally accept such documentation and provide the recommended testing accommodation without further inquiry.

Reports from qualified professionals who have evaluated the candidate should take precedence over reports from testing entity reviewers who have never conducted the requisite assessment of the candidate for diagnosis and treatment. This is especially important for individuals with learning disabilities because face-to-face interaction is a critical component of an accurate evaluation, diagnosis, and determination of appropriate testing accommodations.

  • A qualified professional’s decision not to provide results from a specific test or evaluation instrument should not preclude approval of a request for testing accommodations where the documentation provided by the candidate, in its entirety, demonstrates that the candidate has a disability and needs a requested testing accommodation. For example, if a candidate submits documentation from a qualified professional that demonstrates a consistent history of a reading disorder diagnosis and that recommends the candidate receive double time on standardized exams based on a personal evaluation of the candidate, a testing entity should provide the candidate with double time. This is true even if the qualified professional does not include every test or subtest score preferred by the testing entity in the psychoeducational or neuropsychological report.

How Quickly Should A Testing Entity Respond To A Request For Testing Accommodations?

A testing entity must respond in a timely manner to requests for testing accommodations so as to ensure equal opportunity for individuals with disabilities. Testing entities should ensure that their process for reviewing and approving testing accommodations responds in time for applicants to register and prepare for the test.[6] In addition, the process should provide applicants with a reasonable opportunity to respond to any requests for additional information from the testing entity, and still be able to take the test in the same testing cycle. Failure by a testing entity to act in a timely manner, coupled with seeking unnecessary documentation, could result in such an extended delay that it constitutes a denial of equal opportunity or equal treatment in an examination setting for persons with disabilities.

How Should Testing Entities Report Test Scores for Test-Takers Receiving Disability-Related Accommodations?

Testing entities should report accommodated scores in the same way they report scores generally. Testing entities must not decline to report scores for test-takers with disabilities receiving accommodations under the ADA.

Flagging policies that impede individuals with disabilities from fairly competing for and pursuing educational and employment opportunities are prohibited by the ADA. “Flagging” is the policy of annotating test scores or otherwise reporting scores in a manner that indicates the exam was taken with a testing accommodation. Flagging announces to anyone receiving the exam scores that the test-taker has a disability and suggests that the scores are not valid or deserved. Flagging also discourages test-takers with disabilities from exercising their right to testing accommodations under the ADA for fear of discrimination. Flagging must not be used to circumvent the requirement that testing entities provide testing accommodations for persons with disabilities and ensure that the test results for persons with disabilities reflect their abilities, not their disabilities.

To view model testing accommodation practices and for more information about the ADA, please visit our website or call our toll-free number:

  • ADA Website: www.ADA.gov
  • ADA Information Line: 800-514-0301 (Voice) and 1-833-610-1264 (TTY); M, Tu, W, F: 9:30am - 12pm and 3pm - 5:30pm ET, Th: 2:30pm - 5:30pm ET
  • Model Testing Accommodation Practices Resulting From Recent Litigation: http://archive.ada.gov/lsac_best_practices_report.docx

[1] This document does not address how the requirements or protections, as applicable, of Title II of the ADA, Section 504 of the Rehabilitation Act, the assessment provisions in the Elementary and Secondary Education Act (ESEA) and the Individuals with Disabilities Education Act (IDEA), and their implementing regulations, apply to, or interact with, the administration of state-wide and district-wide assessments to students with disabilities conducted by public educational entities.

[2] See 28 C.F.R. §§ 36.303(b), 36.309(b)(3) (providing non-exhaustive lists of auxiliary aids and services).

[3] Under Section 309 of the ADA, any person (including both public and private entities) that offers examinations related to applications, licensing, certification, or credentialing for secondary or postsecondary education, professional, or trade purposes must offer such examinations “in a place and manner accessible to persons with disabilities or offer alternative accessible arrangements for such individuals.” 42 U.S.C. § 12189. Under regulations implementing this ADA provision, any private entity that offers such examinations must “assure that the examination is selected and administered so as to best ensure that, when the examination is administered to an individual with a disability that impairs sensory, manual, or speaking skills, the examination results accurately reflect the individual’s aptitude or achievement level or whatever other factor the examination purports to measure, rather than reflecting the individual’s impaired sensory, manual, or speaking skills (except where those skills are the factors that the examination purports to measure).” 28 C.F.R. § 36.309. Likewise, under regulations implementing title II of the ADA, public entities offering examinations must ensure that their exams do not provide qualified persons with disabilities with aids, benefits, or services that are not as effective in affording equal opportunity to obtain the same result, to gain the same benefit, or to reach the same level of achievement as that provided to others, 28 C.F.R. § 35.130(b)(1)(iii), and may not administer a licensing or certification program in a manner that subjects qualified individuals with disabilities to discrimination on the basis of disability. 28 C.F.R. § 35.130(b)(6). Both the title II and title III regulations also require public and private testing entities to provide modifications and auxiliary aids and services for individuals with disabilities unless the entity can demonstrate an applicable defense. 28 C.F.R. §§ 35.130(b)(7), 35.160(b), 35.164; 28 C.F.R. §§ 36.309(b)(1)(iv-vi), (b)(2), 36.309(b)(3).

[4] An IEP contains the special education and related services and supplementary aids and services provided to an eligible student with a disability under Part B of the IDEA, 20 U.S.C. §§ 1400 et seq. and 34 C.F.R. part 300.

[5] A Section 504 Plan could contain the regular or special education and related aids and services provided pursuant to section 504 of the Rehabilitation Act of 1973, 29 U.S.C. § 794 and 34 C.F.R. part 104.

[6] Testing entities must offer examinations to individuals with disabilities in as timely a manner as offered to others and should not impose earlier registration deadlines on those seeking testing accommodations.

For persons with disabilities, this publication is available in alternate formats.

Duplication of this document is encouraged.

The Americans with Disabilities Act authorizes the Department of Justice (the Department) to provide technical assistance to individuals and entities that have rights or responsibilities under the Act. This document provides informal guidance to assist you in understanding the ADA and the Department’s regulations.

This guidance document is not intended to be a final agency action, has no legally binding effect, and may be rescinded or modified in the Department’s complete discretion, in accordance with applicable laws. The Department’s guidance documents, including this guidance, do not establish legally enforceable responsibilities beyond what is required by the terms of the applicable statutes, regulations, or binding judicial precedent.

Originally issued: September 08, 2015



MoSCoW Prioritization

What is MoSCoW prioritization?

MoSCoW prioritization, also known as the MoSCoW method or MoSCoW analysis, is a popular prioritization technique for managing requirements. 

  The acronym MoSCoW represents four categories of initiatives: must-have, should-have, could-have, and won’t-have, or will not have right now. Some companies also use the “W” in MoSCoW to mean “wish.”

What is the History of the MoSCoW Method?

Software development expert Dai Clegg created the MoSCoW method while working at Oracle. He designed the framework to help his team prioritize tasks during development work on product releases.

You can find a detailed account of using MoSCoW prioritization in the Dynamic System Development Method (DSDM) handbook . But because MoSCoW can prioritize tasks within any time-boxed project, teams have adapted the method for a broad range of uses.

How Does MoSCoW Prioritization Work?

Before running a MoSCoW analysis, a few things need to happen. First, key stakeholders and the product team need to get aligned on objectives and prioritization factors. Then, all participants must agree on which initiatives to prioritize.

At this point, your team should also discuss how they will settle any disagreements in prioritization. If you can establish how to resolve disputes before they come up, you can help prevent those disagreements from holding up progress.

Finally, you’ll also want to reach a consensus on what percentage of resources you’d like to allocate to each category.

With the groundwork complete, you may begin determining which category is most appropriate for each initiative. But, first, let’s further break down each category in the MoSCoW method.

MoSCoW prioritization categories


1. Must-have initiatives

As the name suggests, this category consists of initiatives that are “musts” for your team. They represent non-negotiable needs for the project, product, or release in question. For example, if you’re releasing a healthcare application, a must-have initiative may be security functionalities that help maintain compliance.

The “must-have” category requires the team to complete a mandatory task. If you’re unsure about whether something belongs in this category, ask yourself whether the product or release would still work without it.


If the product won’t work without an initiative, or the release becomes useless without it, the initiative is most likely a “must-have.”

2. Should-have initiatives

Should-have initiatives are just a step below must-haves. They are essential to the product, project, or release, but they are not vital. If left out, the product or project still functions. However, the initiatives may add significant value.

“Should-have” initiatives are different from “must-have” initiatives in that they can get scheduled for a future release without impacting the current one. For example, performance improvements, minor bug fixes, or new functionality may be “should-have” initiatives. Without them, the product still works.

3. Could-have initiatives

Another way of describing “could-have” initiatives is nice-to-haves. “Could-have” initiatives are not necessary to the core function of the product. However, compared with “should-have” initiatives, they have a much smaller impact on the outcome if left out.

So, initiatives placed in the “could-have” category are often the first to be deprioritized if a project in the “should-have” or “must-have” category ends up larger than expected.

4. Will not have (this time)

One benefit of the MoSCoW method is that it places several initiatives in the “will-not-have” category. The category can manage expectations about what the team will not include in a specific release (or another timeframe you’re prioritizing).

Placing initiatives in the “will-not-have” category is one way to help prevent scope creep . If initiatives are in this category, the team knows they are not a priority for this specific time frame. 

Some initiatives in the “will-not-have” group will be prioritized in the future, while others are not likely to happen. Some teams decide to differentiate between those by creating a subcategory within this group.
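
To make the four categories concrete, here is a minimal Python sketch of how a team might record the outcome of a MoSCoW discussion and group its backlog by category. The backlog items and the grouping helper are hypothetical illustrations, not part of any particular tool or of the DSDM handbook:

```python
from collections import defaultdict
from enum import Enum

class MoSCoW(Enum):
    MUST = "must-have"
    SHOULD = "should-have"
    COULD = "could-have"
    WONT = "will-not-have (this time)"

# Hypothetical backlog: each initiative is paired with the category
# the team agreed on during its MoSCoW discussion.
backlog = [
    ("Security and compliance functionality", MoSCoW.MUST),
    ("Minor bug fixes and performance improvements", MoSCoW.SHOULD),
    ("Nice-to-have theming options", MoSCoW.COULD),
    ("Legacy integration rewrite", MoSCoW.WONT),
]

def group_by_category(items):
    """Group initiatives by MoSCoW category for a release plan."""
    groups = defaultdict(list)
    for name, category in items:
        groups[category].append(name)
    return groups

if __name__ == "__main__":
    groups = group_by_category(backlog)
    # Enum iteration preserves definition order, so must-haves print first
    # and will-not-haves print last.
    for category in MoSCoW:
        for name in groups.get(category, []):
            print(f"{category.value}: {name}")
```

Grouping this way mirrors the method’s intent: the “must” list defines the release, while the explicit “will not have (this time)” list documents scope decisions and helps manage expectations.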

How Can Development Teams Use MoSCoW?

  Although Dai Clegg developed the approach to help prioritize tasks around his team’s limited time, the MoSCoW method also works when a development team faces limitations other than time. For example: 

Prioritize based on budgetary constraints.

What if a development team’s limiting factor is not a deadline but a tight budget imposed by the company? Working with the product managers, the team can use MoSCoW first to decide on the initiatives that represent must-haves and the should-haves. Then, using the development department’s budget as the guide, the team can figure out which items they can complete. 

Prioritize based on the team’s skillsets.

A cross-functional product team might also find itself constrained by the experience and expertise of its developers. If the product roadmap calls for functionality the team does not have the skills to build, this limiting factor will play into scoring those items in their MoSCoW analysis.

Prioritize based on competing needs at the company.

Cross-functional teams can also find themselves constrained by other company priorities. The team wants to make progress on a new product release, but the executive staff has created tight deadlines for further releases in the same timeframe. In this case, the team can use MoSCoW to determine which aspects of their desired release represent must-haves and temporarily backlog everything else.

What Are the Drawbacks of MoSCoW Prioritization?

Although many product and development teams have adopted MoSCoW prioritization, the approach has potential pitfalls. Here are a few examples.

1. An inconsistent scoring process can lead to tasks placed in the wrong categories.

One common criticism of MoSCoW is that it does not include an objective methodology for ranking initiatives against each other. Your team will need to bring its own ranking methodology to the analysis. The MoSCoW approach works only if your team applies a consistent scoring system to all initiatives.

Pro tip: One proven method is weighted scoring, where your team measures each initiative on your backlog against a standard set of cost and benefit criteria. You can use the weighted scoring approach in ProductPlan’s roadmap app.
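To make the pro tip concrete, here is a minimal weighted-scoring sketch in Python. It is not ProductPlan's implementation; the criteria, weights, initiative names, and scores are all invented for illustration, with cost criteria given negative weights so that expensive or risky items rank lower.

# Illustrative weighted scoring: benefit criteria carry positive weights,
# cost criteria negative ones, so higher totals indicate better candidates.
WEIGHTS = {
    "customer_value": 0.4,
    "revenue_potential": 0.3,
    "implementation_cost": -0.2,
    "risk": -0.1,
}

initiatives = {
    "SSO integration": {"customer_value": 8, "revenue_potential": 6, "implementation_cost": 7, "risk": 4},
    "Dark mode":       {"customer_value": 5, "revenue_potential": 2, "implementation_cost": 3, "risk": 1},
    "Audit logging":   {"customer_value": 7, "revenue_potential": 5, "implementation_cost": 4, "risk": 2},
}

def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

ranked = sorted(initiatives, key=lambda name: weighted_score(initiatives[name]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(initiatives[name]):.2f}")

Ranking the weighted totals before the MoSCoW discussion gives the team a consistent basis for deciding which bucket each initiative belongs in.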

2. Not including all relevant stakeholders can lead to items placed in the wrong categories.

To know which of your team’s initiatives represent must-haves for your product and which are merely should-haves, you will need as much context as possible.

For example, you might need someone from your sales team to let you know how important (or unimportant) prospective buyers consider a proposed new feature.

One pitfall of the MoSCoW method is that you could make poor decisions about where to slot each initiative unless your team receives input from all relevant stakeholders. 

3. Team bias for (or against) initiatives can undermine MoSCoW’s effectiveness.

Because MoSCoW does not include an objective scoring method, your team members can fall victim to their own opinions about certain initiatives. 

One risk of using MoSCoW prioritization is that a team can mistakenly think MoSCoW itself represents an objective way of measuring the items on their list. They discuss an initiative, agree that it is a “should have,” and move on to the next.

But your team will also need an objective and consistent framework for ranking all initiatives. That is the only way to minimize your team’s biases in favor of items or against them.

When Do You Use the MoSCoW Method for Prioritization?

MoSCoW prioritization is effective for teams that want to include representatives from the whole organization in their process. You can capture a broader perspective by involving participants from various functional departments.

Another reason you may want to use MoSCoW prioritization is it allows your team to determine how much effort goes into each category. Therefore, you can ensure you’re delivering a good variety of initiatives in each release.

What Are Best Practices for Using MoSCoW Prioritization?

If you’re considering giving MoSCoW prioritization a try, here are a few steps to keep in mind. Incorporating these into your process will help your team gain more value from the MoSCoW method.

1. Choose an objective ranking or scoring system.

Remember, MoSCoW helps your team group items into the appropriate buckets—from must-have items down to your longer-term wish list. But MoSCoW itself doesn’t help you determine which item belongs in which category.

You will need a separate ranking methodology. You can choose from many, such as:

  • Weighted scoring
  • Value vs. complexity
  • Buy-a-feature
  • Opportunity scoring

For help finding the best scoring methodology for your team, check out ProductPlan’s article: 7 strategies to choose the best features for your product.

2. Seek input from all key stakeholders.

To make sure you’re placing each initiative into the right bucket—must-have, should-have, could-have, or won’t-have—your team needs context. 

At the beginning of your MoSCoW method, your team should consider which stakeholders can provide valuable context and insights. Sales? Customer success? The executive staff? Product managers in another area of your business? Include them in your initiative scoring process if you think they can help you see opportunities or threats your team might miss. 

3. Share your MoSCoW process across your organization.

MoSCoW gives your team a tangible way to show your organization how you prioritize initiatives for your products or projects.

The method can help you build company-wide consensus for your work, or at least help you show stakeholders why you made the decisions you did.

Communicating your team’s prioritization strategy also helps you set expectations across the business. When they see your methodology for choosing one initiative over another, stakeholders in other departments will understand that your team has thought through and weighed all decisions you’ve made. 

If any stakeholders have an issue with one of your decisions, they will understand that they can’t simply complain—they’ll need to present you with evidence to alter your course of action.  

Related Terms

2×2 prioritization matrix / Eisenhower matrix / DACI decision-making framework / ICE scoring model / RICE scoring model


Response Assessment Accommodations

What are response assessment accommodations?

Response accommodations allow students to respond to test questions in different ways or to solve or organize a response using some type of assistive device or organizer. Responses to test questions in alternate formats need to be carefully copied onto a standard answer form before submitting the test for scoring. Most states have detailed instructions in their test administration manuals for how this is to be done. Some states do not allow particular response accommodations on some tests or parts of tests.

Who can benefit from response assessment accommodations?

Response accommodations can benefit students with physical, sensory, or learning disabilities (including difficulties with memory, sequencing, directionality, alignment, and organization).

How are specific response assessment accommodations administered?

Express response to a scribe through speech, sign language, pointing or by using an assistive communication device

A scribe is someone who writes down a student's answers to test questions. If it is a writing test, the scribe might write a person's essay as the test taker dictates it through speech, sign language, or by using some type of assistive communication device. The test taker is responsible for telling the scribe where to place punctuation marks, for indicating sentences and paragraphs, and for spelling certain words. There is a lot of skill involved in using a scribe for writing extended responses, skill that requires extensive practice and competence in the classroom. A scribe may also be used on multiple choice tests to fill in the "bubbles" on an answer sheet or to write short answers. A single message communication device (e.g., BIG mack) may be used to express responses to a scribe.

A scribe may not edit or alter student responses in any way and must record word-for-word exactly what the student has dictated. Scribes should request clarification from the student about the use of punctuation, capitalization, and the spelling of key words, and must allow the student to review and edit what the scribe has written. 

A person who serves as a scribe needs to be carefully prepared prior to testing to assure that he/she knows the vocabulary involved in the testing process and understands the boundaries of the assistance to be provided. In general, the role of the scribe is to write what is dictated, no more and no less. 

Type on or speak to word processor 

This option may increase the independence of the test taker and reduce the number of trained scribes needed on test day. Research has found that students who write better essays on computers than by hand are those who are very familiar with computers and have good keyboarding skills. Assistive technology that can be used for typing includes sticky keys, touch screens, trackballs, mouth or head sticks or other pointing devices, and customized keyboards.

Speech-to-text conversion or voice recognition allows a student to use his or her voice as an input device. Voice recognition may be used to dictate text into the computer or to give commands to the computer (such as opening application programs, pulling down menus, or saving work). Older voice recognition applications require each word to be separated by a distinct pause, which allows the machine to determine where one word ends and the next begins. This style of dictation is called discrete speech. Continuous speech voice recognition applications allow students to dictate text fluently into the computer. These newer applications can recognize speech at up to 160 words per minute. While these systems do give students system control, they are not yet hands-free.

Type on Brailler

A Brailler is a Braille keyboard used for typing text that can then be printed in standard print or Braille (embosser). The Brailler is similar to a typewriter or computer keyboard. Paper is inserted into the Brailler, and multiple keys are pressed at once, creating an entire cell with each press. Through an alternative computer port, newer Braillers can simultaneously act as a speech synthesizer that reads the text displayed on the screen when paired with a screen reading program.

Speak into tape recorder

Student responses are recorded on a tape recorder for later verbatim transcription by another person. Students using this accommodation need to be tested in a private setting with adult supervision.

Write in test booklet instead of on answer sheet

This accommodation allows the test-taker to indicate responses directly in the test booklet and have someone transfer the answers to the answer sheet after the student has completed the test. Bubbled answer sheets (electronic scanning forms) may be difficult or impossible for some students to complete accurately and neatly. A student may have difficulty finding the right place to respond on a bubble sheet.

Monitor placement of student responses on answer sheet

Students who are able to use bubbled answer sheets may benefit from having an adult simply monitor the placement of their responses to ensure that they are actually responding to the intended question.

Materials or devices used to solve or organize responses

Calculation devices

If a student's disability affects math calculation but not reasoning, he or she may request to use a calculator or other assistive device (e.g., number chart, arithmetic table, manipulatives, or abacus). It is important to determine whether the use of a calculation device is a matter of convenience or a necessary accommodation. Several tests allow the use of calculation devices for at least a portion of a test for all students, not just those with disabilities. Some states allow students with disabilities to use a calculation device on portions of a test where its use is prohibited for the general population of students.

It is important to know what is being tested before making decisions about the use of calculation devices. For example, if a test item is measuring subtraction with regrouping, using a calculator would not give a student an opportunity to show regrouping. On the other hand, if an item is testing problem-solving skills and the problem includes subtraction (e.g., bargain shopping for items with a better value), it may not be necessary for a student to show whether regrouping has been mastered, making the use of a calculation device a valid accommodation.

Calculators may be adapted with large keys. Calculators with voice output (talking calculators) are also available and need to be used in individual settings to keep from distracting other students. 

An abacus may be useful for students when mathematics problems are to be calculated without a calculator. The abacus functions as paper and pencil for students with visual impairments. 

Spelling and grammar assistive devices

The use of a dictionary may be allowed on tests that require an extended response or essay. Spelling and grammar can also be checked with pocket spellcheckers. Students enter an approximate spelling and then see or hear the correct spelling or correct use of the word. Students who respond using a word processor may be allowed to use a spell-check or other electronic spelling device. Some states require spell-check and grammar-checking devices to be turned off for writing tests.

Visual organizers

Visual organizers include templates, highlighters, place markers, scratch paper, and graph paper. Some states do not allow any marks to be made in the test booklet except as a specific accommodation because the booklets are passed from school to school and reused. In some states, all students are allowed to use highlighters, underline words, and write in margins. Some states allow students to use scratch paper or graph paper to align numbers.

Graphic organizers

Graphic organizers help students arrange information into patterns in order to organize responses and stay focused on the content. Graphic organizers are especially helpful on tests where students are expected to write an extended response or essay.


How To Prioritise Requirements With The MoSCoW Technique



On most projects, we talk about requirements and features that are either in scope or out of scope. But to manage those requirements effectively, we also have to prioritize them. And this is where the MoSCoW technique comes in.

Let me explain what M, S, C, and W stand for.

  • M is a must-have requirement. Something that’s essential to the project and that’s not negotiable.
  • S is a should-have requirement. Something we need in the project if at all possible.
  • C stands for could-have. Something that’s nice to have in case we have extra time and budget.
  • W stands for will-not-have (this time). Something that is explicitly out of scope for this project or timeframe.

Why Use MoSCoW Technique for Requirement Prioritization?

Using the MoSCoW technique gives us a more granular view of what is in or out of the scope of the project, and it helps us deliver the most important requirements to the customer first. In other words, it helps you to manage your client’s expectations. And as you will come to see, the MoSCoW technique can also be used to delegate work and to be explicit about what needs to get done and what doesn't need to get done.


How to Use MoSCoW Technique for Requirement Prioritization?

Let us look at an example of how to use the technique in practice. I would like you to imagine that your job is to project manage an upcoming conference. This is a yearly conference where delegates will come to network and hear industry experts talk about sustainability in project management.

As you meet with the organization behind the event, i.e. your client, you ask them what their must-have requirements are for the conference. You are curious to know everything you must deliver to them for them to be satisfied. Your client responds that the event must be held at an indoor venue within five kilometers of the city center and that it must be within the allocated budget. It must be able to host 150 people and it must have facilities to serve lunch.

You then ask your client what there should be at the event if at all possible. They answer that you should arrange for three speakers in the morning and three speakers in the afternoon. All of them should be recognized within the industry, if at all possible. In addition, you should make time for the delegates to network with each other during lunch, and lunch should, ideally, be a sit-down affair with hot food. Finally, each delegate should receive a goodie bag upon arrival.

You furthermore enquire with your client what there could be at the event, i.e. what are some nice-to-have requirements you could incorporate? You’re not promising to deliver those requirements, but in case you have extra time and budget you can look into them. It turns out that your client would like a famous sports personality or businessperson to open the conference. But it’s not essential and only possible if the budget allows it. They also think that a panel discussion on sustainability at some point after lunch would be nice, but it isn’t essential.

You finally ask them what there will not be at this event, i.e. which requirements are firmly out of scope. Your client answers that there will not be multiple tracks of speakers and that there will not be any alcohol served at any point during the day. They also specify that this year there won’t be a second day of in-depth workshops taking place.

Using the MoSCoW technique in this way to categorize all the project’s requirements is a very user-friendly method, which your client will be able to easily understand. Initially, your client may say that everything is a must-have requirement, but when you explain that must-have requirements come with a price tag, they will understand that they can’t have everything unless they increase the budget and give you more time to deliver it.

When you plan your project and put together the project plan, only include the must-have and should-have items. This is what you’re promising to deliver. You’re not promising to deliver the could-have items. They can go on a separate wish list. Also, take care to properly document the will-not-have requirements. You may think that you can forget about them because they are out of scope, but it’s necessary to document them as you may have to refer back to them later.

Example of using the MoSCoW technique to describe features of a requirement

What I really like about the MoSCoW technique is that you can also use it at a more detailed level to describe the features of a requirement. Let’s say, for example, that you have delegated the goodie bag task to one of your team members. That’s the little bag each participant will receive when they arrive at the venue, which normally contains a few freebies. It’s the team member’s job to gather the detailed requirements for the goodie bag and to physically produce it. As you’re delegating the task, the team member will want to know what your expectations are and what they must deliver to you at the end. You should explain the required information clearly, along the following lines (captured again in a small sketch after the list):

  • Must have (M): There must be 150 goodie bags. Each bag must contain a copy of the event program, and both the bag and the event program must be made of recyclable materials.
  • Should have (S): There should be two free branded items inside, such as a pen and paper, if at all possible.
  • Could have (C): The bag could contain something sweet, like mints, but only if a suitable sponsor is found. The bag could also contain a small bottle of water as a nice-to-have.
  • Will not have (W): The bags will not contain any alcohol, and the combined weight will not be more than one kg.
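For teams that track delegated requirements digitally, the same brief can be captured in a small structure. The Python sketch below is illustrative only; the requirement wording is taken from the example list above, and the representation itself is an assumption rather than part of the MoSCoW technique.

# The goodie-bag brief from the example above, grouped by MoSCoW category.
goodie_bag_brief = {
    "must":   ["150 goodie bags",
               "each bag contains a copy of the event program",
               "bag and program made of recyclable materials"],
    "should": ["two free branded items inside, e.g. a pen and paper"],
    "could":  ["something sweet, like mints (only if a sponsor is found)",
               "a small bottle of water"],
    "wont":   ["no alcohol",
               "combined weight not more than 1 kg"],
}

for category in ("must", "should", "could", "wont"):
    print(category.upper())
    for item in goodie_bag_brief[category]:
        print(f"  - {item}")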

Whose Responsibility Is It to Prioritize?

Business Analysts are mainly responsible for taking up the most complex requirements and breaking them down into simple tasks that can be implemented by anyone. But the BA can’t do the prioritization alone. They need to bring several stakeholders into the process and get their approval on the requirements’ priority. It is essential for the BA to understand the dependencies between the requirements before prioritizing them.

Benefits of using the MoSCoW technique for Business Analysts

The BA can make use of any prioritization technique to prioritize the requirements thoroughly, but the MoSCoW technique is the most effective one among all the prioritization techniques available. Some of the benefits of using the MoSCoW technique for Business Analysts are shown in the figure below.

Figure: Benefits of the MoSCoW technique for Business Analysts

Drawbacks of MoSCoW Prioritization

Many product and development teams rely on MoSCoW, although the strategy has some potential drawbacks. Here are a few instances:

Sometimes tasks are assigned to the wrong categories due to inconsistent scoring. MoSCoW is frequently criticized for its lack of an impartial approach to comparing initiatives to one another. Your team will need to bring its own scoring process to the analysis. The MoSCoW technique only functions if your team adopts a standard scoring methodology across all efforts.

Items could end up in the wrong categories if all pertinent stakeholders are not included. To decide which of your team's activities are absolutely necessary for your product and which are merely desirable, you'll need as much information as possible. For instance, you might require feedback from a member of your sales staff regarding the significance (or lack thereof) of a proposed new feature to potential customers. Without feedback from all pertinent stakeholders, your team may make poor choices about where to place each initiative, which is a drawback of the MoSCoW technique.

Team bias for (or against) certain initiatives can reduce MoSCoW's efficacy. Your team members might be influenced by their own opinions on particular initiatives because MoSCoW lacks a framework for objectively evaluating them. Using MoSCoW prioritization carries the danger that a team may erroneously think that MoSCoW itself is an impartial way to evaluate the items on their list.

They discuss one initiative, agree that it is a "should have", and then move on to the next. Your team will also need a consistent and impartial system for rating every proposal; that is the only way to lessen your team's biases for or against particular items.

As we can see, the MoSCoW technique lets us prioritize requirements at a high level, but also at a low level to specify the detailed requirements, or features, of a product. Used at a low level, it also helps you delegate tasks to team members and set expectations. Are you ready to give it a go?

Frequently Asked Questions (FAQs)

Step 1: Identify the purpose and prioritization strategy. 

Step 2: Make a list of the customer's requirements. 

Step 3: Make a list of the requirements. 

Step 4: Assist in the rating of requirements for interrelationships. 

Step 5: Identify technical and developmental factors. 

Step 6: Establish a priority rating. 

MoSCoW is a prioritization technique used to prioritize backlog items. It is a method of categorizing items in a backlog, usually into the Must-Have, Should-Have, Could-Have, and Won't-Have categories. It helps to identify which items are of the highest priority and which can be left out or completed later. 

MoSCoW analysis is a method used to prioritize tasks and goals within a business. It helps businesses succeed by allowing them to focus their resources and efforts on the most important tasks and goals while still providing flexibility to adapt to changing circumstances. It helps businesses prioritize their activities and allocate resources efficiently, allowing them to achieve their objectives promptly. 

Profile

Susanne Madsen

Susanne Madsen is an internationally recognised project leadership coach, trainer and consultant. She is the author of The Project Management Coaching Workbook and The Power of Project Leadership. Working with organisations globally she helps project managers step up and become better leaders.

Prior to setting up her own business, Susanne worked for almost 20 years in the corporate sector leading high-profile programmes of up to $30 million for organisations such as Standard Bank, Citigroup and JPMorgan Chase. She is a fully qualified Corporate and Executive coach, accredited by DISC and a regular contributor to the Association for Project Management (APM).

Susanne also co-founded The Project Leadership Institute, which is dedicated to building authentic project leaders by engaging the heart, the soul and the mind.


Chapter 10: MoSCoW Prioritisation


10.1 Introduction

In a DSDM project where time has been fixed, it is vital to understand the relative importance of the work to be done in order to make progress and keep to deadlines. Prioritisation can be applied to requirements/User Stories, tasks, products, use cases, acceptance criteria and tests, although it is most commonly applied to requirements/ User Stories. (User Stories are a very effective way of defining requirements in an Agile style; see later chapter on Requirements and User Stories for more information.) MoSCoW is a prioritisation technique for helping to understand and manage priorities. The letters stand for:

  • Must Have
  • Should Have
  • Could Have
  • Won’t Have this time

The use of MoSCoW works particularly well on projects. It also overcomes the problems associated with simpler prioritisation approaches which are based on relative priorities:

  • The use of a simple high, medium or low classification is weaker because definitions of these priorities are missing or need to be defined. Nor does this categorisation provide the business with a clear promise of what to expect. A categorisation with a single middle option, such as medium, also allows for indecision
  • The use of a simple sequential 1,2,3,4… priority is weaker because it deals less effectively with items of similar importance. There may be prolonged and heated discussions over whether an item should be one place higher or lower

The specific use of Must Have, Should Have, Could Have or Won’t Have this time provides a clear indication of that item and the expectations for its completion.  

10.2 The MoSCoW Rules

10.2.1 Must Have

These provide the Minimum Usable SubseT (MUST) of requirements which the project guarantees to deliver. These may be defined using some of the following:

  • No point in delivering on target date without this; if it were not delivered, there would be no point deploying the solution on the intended date
  • Not legal without it
  • Unsafe without it
  • Cannot deliver a viable solution without it

Ask the question ‘what happens if this requirement is not met?’ If the answer is ‘cancel the project – there is no point in implementing a solution that does not meet this requirement’, then it is a Must Have requirement. If there is some way around it, even if it is a manual and painful workaround, then it is a Should Have or a Could Have requirement. Categorising a requirement as a Should Have or Could Have does not mean it won’t be delivered; simply that delivery is not guaranteed.  

10.2.2 Should Have

Should Have requirements are defined as:

  • Important but not vital
  • May be painful to leave out, but the solution is still viable
  • May need some kind of workaround, e.g. management of expectations, some inefficiency, an existing solution, paperwork etc. The workaround may be just a temporary one

One way of differentiating a Should Have requirement from a Could Have is by reviewing the degree of pain caused by the requirement not being met, measured in terms of business value or numbers of people affected.  

10.2.3 Could Have

Could Have requirements are defined as:

  • Wanted or desirable but less important
  • Less impact if left out (compared with a Should Have)

These are the requirements that provide the main pool of contingency, since they would only be delivered in their entirety in a best case scenario. When a problem occurs and the deadline is at risk, one or more of the Could Haves provide the first choice of what is to be dropped from this timeframe.  

10.2.4 Won’t Have this time

These are requirements which the project team has agreed will not be delivered (as part of this timeframe). They are recorded in the Prioritised Requirements List where they help clarify the scope of the project. This avoids them being informally reintroduced at a later date. This also helps to manage expectations that some requirements will simply not make it into the Deployed Solution, at least not this time around. Won’t Haves can be very powerful in keeping the focus at this point in time on the more important Could Haves, Should Haves and particularly the Must Haves.

10.3 MoSCoW Relating to a Specific Timeframe

In a traditional project, all requirements are treated as Must Have, since the expectation is set from the start that everything will be delivered and that typically time (the end date) will slip if problems are encountered. DSDM projects have a very different approach; fixing time, cost and quality and negotiating features. By the end of Foundations, the end dates for the project and for the first Project Increment are confirmed. In order to meet this commitment to the deadline, DSDM projects need to create contingency within the prioritised requirements. Therefore the primary focus initially is to create MoSCoW priorities for the project. However, when deciding what to deliver as part of the Project Increment, the next focus will be to agree MoSCoW priorities for that Increment. So at this point, a requirement may have two priorities; MoSCoW for the project and MoSCoW for the Increment. Finally, when planning a specific Timebox (at the start of each Timebox) the Solution Development Team will allocate a specific priority for the requirements for this Timebox. At this point, the majority of requirements are Won’t Have (for this Timebox). Only requirements that the Solution Development Team plan to work on in the development timebox are allocated a Must Have, Should Have or Could Have priority. Therefore requirements may have three levels of priority:

  • MoSCoW for the project
  • MoSCoW for the Project Increment
  • MoSCoW for this Timebox

For example:

Even if a Must Have requirement for an IT solution is the facility to archive old data, it is very likely that the solution could be used effectively for a few months without this facility being in place. In this case, it is sensible to make the archive facility a Should Have or a Could Have for the first Project Increment even though delivery of this facility is a Must Have before the end of the project. Similarly, a Must Have requirement for a Project Increment may be included as a Should Have or a Could Have (or a Won't Have) for an early Timebox.
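A tool or spreadsheet can record this layered prioritisation per requirement. The Python sketch below is an illustration rather than part of DSDM; it models the archive-facility example above with separate priorities for the project, the first Project Increment, and the current Timebox.

from dataclasses import dataclass, field

PRIORITIES = ("Must", "Should", "Could", "Wont")

@dataclass
class Requirement:
    name: str
    # One MoSCoW priority per timeframe: project, increment, timebox.
    priority: dict = field(default_factory=dict)

    def set_priority(self, timeframe: str, value: str) -> None:
        assert value in PRIORITIES, f"unknown priority {value!r}"
        self.priority[timeframe] = value

archive = Requirement("Archive old data")
archive.set_priority("project",   "Must")    # vital before the end of the project
archive.set_priority("increment", "Could")   # solution usable without it for a few months
archive.set_priority("timebox",   "Wont")    # not planned for the current Timebox

print(archive.priority)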

It is important that the bigger picture objectives (completion of the Project Increment and delivery of the project) are not forgotten when working at the Timebox level. One simple way to deal with this is to create a separate Timebox PRL, a subset of the project PRL that is specifically associated with an individual Timebox and leave the priorities unchanged on the main PRL for the project.  

10.4 Ensuring effective prioritisation

10.4.1 Balancing the priorities

When deciding the effort allocated for Must Have requirements, remember that anything other than a Must Have is, to some degree, contingency, since the Must Haves define the Minimum Usable SubseT which is guaranteed to be delivered.

DSDM recommends:

  • Getting the percentage of project/Project Increment Must Haves (in terms of effort to deliver) to a level where the team’s confidence to deliver them is high – typically no more than 60% Must Have effort
  • Agreeing a pool of Could Haves for the project/Project Increment that reflects a sensible level of contingency - typically around 20% Could Have effort. Creating a sensible pool of Could Haves sets the correct expectations for the business from the start – that these requirements/User Stories may be delivered in their entirety in a best case scenario, but the primary project/Project Increment focus will always be on protecting the Must Haves and Should Haves

This spread of priorities provides enough contingency to ensure confidence in a successful project outcome. NB: when calculating effort for a timeframe, Won’t Haves (for this timeframe) are excluded. DSDM’s recommendations reflect a typical project scenario. The important thing to make MoSCoW work is to have some visible flexibility in the level of requirements which must be delivered; to be confident of project success, Must Have effort should not exceed 60% of the total.
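The recommendation lends itself to a simple arithmetic check. The Python sketch below is illustrative only: the requirement names and effort figures are invented, while the 60% ceiling and the 20% Could Have pool come from the DSDM guidance quoted above. Won’t Haves are excluded from the calculation, as the text notes.

# Hypothetical requirements for one timeframe: (name, priority, effort in days).
requirements = [
    ("User login",       "Must",   10),
    ("Password reset",   "Must",    5),
    ("Audit trail",      "Should",  6),
    ("Export to CSV",    "Should",  4),
    ("Dark mode",        "Could",   3),
    ("Archive old data", "Wont",    8),   # excluded from the calculation
]

in_scope = [(p, e) for _, p, e in requirements if p != "Wont"]
total = sum(e for _, e in in_scope)
must_share  = sum(e for p, e in in_scope if p == "Must")  / total
could_share = sum(e for p, e in in_scope if p == "Could") / total

print(f"Must Have effort:  {must_share:.0%}")
print(f"Could Have effort: {could_share:.0%}")
if must_share > 0.60:
    print("Warning: Must Have effort exceeds the recommended 60% ceiling.")
if could_share < 0.20:
    print("Note: Could Have pool is below the typical 20% contingency.")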


Figure 10a: MoSCoW – balancing priorities

Levels of Must Have effort above 60% introduce a risk of failure, unless the team are working in a project where all of these criteria are true:

  • Estimates are known to be accurate
  • The approach is very well understood
  • The team are “performing” (based on the Tuckman model)
  • The environment is understood and low-risk in terms of the potential for external factors to introduce delays

In some circumstances the percentage of Must Have effort may be significantly less than 60%. However this can be used to the benefit of the business, by providing the greatest possible flexibility to optimise value delivered across a larger proportion of Should Haves. The exact split of effort between Musts, Shoulds, and Coulds is down to each project team to agree, although DSDM also recommends creating a sensible pool of Could Haves, typically around 20% of the total effort. Effective MoSCoW prioritisation is all about balancing risk and predictability for each project.  

10.4.2 Agreeing up front how priorities will work

DSDM defines what the different priorities mean – the MoSCoW Rules. But whereas the definition of a Must Have is not negotiable, the difference between a Should Have and a Could Have can be quite subjective. It is very helpful if the team agree, at the start of their project, how these lower level priorities will be applied. Understanding in advance some objective criteria that separate a Should Have from a Could Have and ensuring that all roles on the project buy into what has been agreed can avoid much heated discussion later. Look for defined boundaries that decide whether a requirement is a Should Have or a Could Have.

For example:

At what point does the number of people impacted raise a Could Have to a Should Have? Or, what value of benefits would justify dropping this requirement from a Should Have to a Could Have?

Ideally this agreement is reached before the requirements are captured.   

10.4.3 When to prioritise

Every item of work has a priority. Priorities are set before work commences and the majority of this prioritisation activity happens during Foundations. However, priorities should be kept under continual review as work is completed. As new work arises, either through introduction of a new requirement or through the exposure of unexpected work associated with existing requirements, the decision must be made as to how critical it is to the success of the current work using the MoSCoW rules. When introducing new requirements, care needs to be taken not to increase the percentage of Must Have requirement effort beyond the agreed project level. The priorities of uncompleted requirements should be reviewed throughout the project to ensure that they are still valid. As a minimum, they should be reviewed at the end of each Timebox and each Project Increment.  

10.4.4 Discussing and reviewing priorities

Any requirement defined as a Must Have will, by definition, have a critical impact on the success of the project. The Project Manager, Business Analyst and any other member of the Solution Development Team should openly discuss requirements prioritised as Must Have where they are not obvious Must Haves (“Without this would we cancel the project/increment?”); it is up to the Business Visionary or their empowered Business Ambassador to explain why a requirement is a Must Have. The escalation of decision-making processes should be agreed early on, e.g. Business Ambassador and Business Analyst to Business Visionary to Business Sponsor, and the level of empowerment agreed around decision-making at each level. At the end of a Project Increment, all requirements that have not been met are re-prioritised in the light of the needs of the next Increment. This means that, for instance, a Could Have that is not met in one Increment may be reclassified subsequently as a Won’t Have for the next Increment, because it does not contribute enough towards the business needs to justify its inclusion. However, it could just as easily become a Must Have for the next Increment, if its low priority in the first Increment was based on the fact it was simply not needed in the first Solution Increment.  

10.5 Using MoSCoW to Manage Business Expectations

The MoSCoW rules have been defined in a way that allows the delivery of the Minimum Usable SubseT of requirements to be guaranteed. Both the Solution Development Team and those to whom they are delivering share this confidence because the high percentage effort of Shoulds and Coulds provides optimum contingency to ensure delivery of the Must Haves. The business roles can certainly expect more than delivery of only the Must Haves. The Must Haves are guaranteed, but it is perfectly reasonable for the business to expect delivery of more than the Minimum Usable SubseT in the timeframe, except under the most challenging of circumstances. DSDM’s recommendation to create a sensible pool of Could Have contingency – typically around 20% of the total project/increment effort – identifies requirements that are less important or which have less impact if not delivered, in order to protect the more important requirements. This approach implies that the business can reasonably expect the Should Have requirements to be met, in addition to all of the Must Haves. It also implies that in a best case scenario, the Could Have requirements would also be delivered.

The Solution Development Team cannot have the confidence to guarantee delivery of all the Must Have, Should Have and Could Have requirements, even though these have all been estimated and are included in the plan. This is because the plan is based on early estimates and on requirements which have not yet been analysed in low-level detail. Applying pressure to a team to guarantee delivery of Musts, Shoulds and Coulds is counter-productive. It usually results in padded estimates which give a false perception of success (“we always achieve 100% because we added significant contingency to our figures”). So, combining sensible prioritisation with timeboxing leads to predictability of delivery and therefore greater confidence. This also protects the quality of the solution being delivered. Keeping project metrics to show the percentage of Should Haves and Could Haves delivered on each Project Increment or Timebox will either reinforce this confidence, if things are going well, or provide an early warning of problems, highlighting that some important (but not critical) requirements may not be met at the project level.  

10.6 How does MoSCoW Relate to the Business Vision

10.6.1 The Business Sponsor’s perspective

The starting point for all projects is the business vision. Associated with the business vision are a set of prioritised requirements that contribute to delivery of the vision. Also associated with the business vision is a Business Case that describes the project in terms of what value it will deliver back to the business. Depending on the organization, this Business Case may be an informal understanding or it may be defined formally, showing what Return On Investment (ROI) is expected in order to justify the cost of the project. The MoSCoW priorities are necessary to understand the Minimum Usable SubseT and the importance of individual requirements. The Business Visionary must ensure that the requirements are prioritised, evaluated in business terms, and delivered to provide the ROI required by the Business Case, in line with the business vision.  

10.7 Making MoSCoW Work

Requirements are identified at various levels of detail, from a high-level strategic viewpoint (typically during Feasibility) through to a more detailed, implementable level (typically during Evolutionary Development). High-level requirements can usually be decomposed to yield a mix of sub-requirements, which can then be prioritised individually. This ensures flexibility is maintained, so that, if necessary, some of the detailed, less important functionality can be dropped from the delivered solution to protect the project deadline. It is this decomposition that can help resolve one of the problems that often confront teams: that all requirements appear to be Must Haves. If all requirements were genuinely Must Haves, then the flexibility derived from MoSCoW prioritisation would no longer work. There would be no lower priority requirements to be dropped from the deliverables to keep a project on time and budget. This goes against the DSDM ethos of fixing time and cost and flexing features (the triangles diagram in the Philosophy and Fundamentals chapter). Believing everything is a Must Have is often symptomatic of insufficient decomposition of requirements. Remember that team members may cause scope creep by working on “interesting” things rather than the important things. MoSCoW can help avoid this.  

10.8 Tips for Assigning Priorities

1. Ensure that the business roles, in particular the Business Visionary and the Business Analyst, are fully up to speed as to why and how DSDM prioritises requirements.

2. Consider starting with all requirements as Won’t Haves, and then justify why they need to be given a higher priority.

3. For each requirement that is proposed as a Must Have, ask: ‘what happens if this requirement is not met?’ If the answer is ‘cancel the project – there is no point in implementing a solution that does not meet this requirement’, then it really is a Must Have. If not, then decide whether it is a Should Have or a Could Have (or even a Won’t Have this time).

4. Ask: ‘if I come to you the night before Deployment and tell you there is a problem with a Must Have requirement and that we can’t deliver it – will you stop the Deployment?’ If the answer is ‘yes’ then this is a Must Have requirement. If not, decide whether it is a Should Have or a Could Have.

5. Is there a workaround, even if it is a manual one? If a workaround exists, then it is not a Must Have requirement. When determining whether this is a Should Have or a Could Have requirement, compare the cost of the workaround with the cost of delivering the requirement, including the cost of any associated delays and any additional cost to implement it later, rather than now.

6. Ask why the requirement is needed – for this project and this Project Increment.

7. Is this requirement dependent on any others being fulfilled? A Must Have cannot depend on the delivery of anything other than a Must Have because of the risk of a Should Have or Could Have not being delivered.

8. Allow different priorities for acceptance criteria of a requirement.

For example:

'The current back-up procedures need to ensure that the service can be restored as quickly as possible.' How quick is that? Given enough time and money, that could be within seconds. A smarter definition would be to say it Should happen within four hours, but it Must happen within 24 hours (a small sketch of such tiered criteria follows these tips).

9. Can this requirement be decomposed? Is it necessary to deliver each of these elements to fulfil the requirement? Are the decomposed elements of the same priority as each other?

10. Tie the requirement to a project objective. If the objective is not a Must Have, then probably neither is the requirement relating to it.

11. Does the priority change with time? For example, for an initial release a requirement is a Should Have, but it will become a Must Have for a later release.

12. Prioritise testing, using MoSCoW.

13. Use MoSCoW to prioritise your To Do list. It can be used for activities as well as requirements.
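Returning to the back-up example under tip 8, tiered acceptance criteria can be checked mechanically. The Python sketch below is illustrative only; the four-hour and 24-hour targets come from that example, and everything else (function name, test values) is assumed.

# Tiered acceptance criteria for "restore service from back-up":
# Should Have: restored within 4 hours; Must Have: restored within 24 hours.
SHOULD_HAVE_HOURS = 4
MUST_HAVE_HOURS = 24

def assess_restore(measured_hours: float) -> str:
    if measured_hours <= SHOULD_HAVE_HOURS:
        return "meets the Should Have target"
    if measured_hours <= MUST_HAVE_HOURS:
        return "meets the Must Have target only"
    return "fails the Must Have target"

for hours in (3, 10, 30):
    print(f"restore in {hours}h: {assess_restore(hours)}")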

10.9 Summary

MoSCoW (Must Have, Should Have, Could Have, Won’t Have this time) is primarily used to prioritise requirements, although the practice is also useful in many other areas. On a typical project, DSDM recommends no more than 60% effort for Must Have requirements on a project, and a sensible pool of Could Haves, usually around 20% effort. Anything higher than 60% Must Have effort poses a risk to the success and predictability of the project, unless the environment and any technology is well understood, the team is well established and the external risks minimal.



Open Access

Peer-reviewed

Research Article

How do soundboard-trained dogs respond to human button presses? An investigation into word comprehension

Author affiliations: Department of Cognitive Science, University of California, San Diego; Department of Psychological & Brain Sciences, Johns Hopkins University; College of Arts and Sciences, Canisius College; FluentPet, Inc., San Diego; Department of Linguistics, University of California, Davis; Statistics and Operational Research Department, Universitat de València; Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna; School of Psychology and Neuroscience, University of St. Andrews.

* E-mail: [email protected]

  • Amalia P. M. Bastos, 
  • Ashley Evenson, 
  • Patrick M. Wood, 
  • Zachary N. Houghton, 
  • Lucas Naranjo, 
  • Gabriella E. Smith, 
  • Alexandria Cairo-Evans, 
  • Lisa Korpos, 
  • Jack Terwilliger, 


  • Published: August 28, 2024
  • https://doi.org/10.1371/journal.pone.0307189


Past research on interspecies communication has shown that animals can be trained to use Augmentative Interspecies Communication (AIC) devices, such as soundboards, to make simple requests of their caretakers. The recent uptake in AIC devices by hundreds of pet owners around the world offers a novel opportunity to investigate whether AIC is possible with owner-trained family dogs. To answer this question, we carried out two studies to test pet dogs’ ability to recognise and respond appropriately to food-related, play-related, and outside-related words on their soundboards. One study was conducted by researchers, and the other by citizen scientists who followed the same procedure. Further, we investigated whether these behaviours depended on the identity of the person presenting the word (unfamiliar person or dog’s owner) and the mode of its presentation (spoken or produced by a pressed button). We find that dogs produced contextually appropriate behaviours for both play-related and outside-related words regardless of the identity of the person producing them and the mode in which they were produced. Therefore, pet dogs can be successfully taught by their owners to associate words recorded onto soundboard buttons to their outcomes in the real world, and they respond appropriately to these words even when they are presented in the absence of any other cues, such as the owner’s body language.

Citation: Bastos APM, Evenson A, Wood PM, Houghton ZN, Naranjo L, Smith GE, et al. (2024) How do soundboard-trained dogs respond to human button presses? An investigation into word comprehension. PLoS ONE 19(8): e0307189. https://doi.org/10.1371/journal.pone.0307189

Editor: Brenton G. Cooper, Texas Christian University, UNITED STATES OF AMERICA

Received: April 18, 2024; Accepted: July 1, 2024; Published: August 28, 2024

Copyright: © 2024 Bastos et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Study pre-registration is deposited in the Open Science Framework ( https://osf.io/tskcq ). Data and analysis scripts are available from GitHub ( https://github.com/znhoughton/comprehension ).

Funding: A.P.M.B. is supported by the Johns Hopkins Provost’s Postdoctoral Fellowship Program.

Competing interests: A.P.M.B., P.M.W., Z.N.H., J. T., G.E.S., & L.K. have previously consulted for FluentPet, Inc., a company that produces AIC devices for pets. A.E. & L.N. are employees of FluentPet, Inc.

Introduction

The use of Augmentative Interspecies Communication (AIC) devices has gained popularity among pet owners in recent years, especially among dog owners [ 1 – 3 ]. Most of these AIC devices consist of soundboards with buttons that, when pressed, produce pre-recorded words in the owner’s voice. Most owners report that they train their dogs using a technique that has become known as modelling: owners demonstrate the actions associated with each button repeatedly until their animals make presses themselves, at which point the same actions are performed by the owners, regardless of whether the animal’s press was intentional or accidental [ 2 ]. Over time, animals may press the buttons on their soundboards more frequently, which may in turn provide them with more control over their daily lives and environments.

Although interspecies communication through soundboard-like AIC devices has been adopted to varying degrees of success with apes, dolphins, and other species (for a review, [ 1 ]), this has primarily been undertaken by researchers and professional animal trainers. In the scientific literature, only a single dog named Sofia has been shown to use buttons to request actions such as going on a walk or playing, following training by a professional dog trainer [ 4 , 5 ]. The training methods used with Sofia are considerably different from the modelling approach that is most common among pet owners: Sofia was first taught to tap her AIC device with her paw on command, which was eventually phased out once she began doing so spontaneously [ 4 ]. Given the nature of Sofia’s training, it is also possible that Sofia experienced less cueing from her trainer than other owner-trained pet dogs, whose performance may be more susceptible to the Clever Hans effect [ 6 , 7 ].

Crucially, communicative production of words, such as that demonstrated by Sofia, must be preceded by the comprehension of said words. Although Sofia is the only dog ever shown to produce requests through an AIC device, many more dogs are capable of responding appropriately to spoken human signals. Most pet dogs are trained to respond to at least a few vocal signals from their owners such as “sit” and “lie down” [ 8 , 9 ], but some dogs such as Rico and Chaser can learn tens or hundreds of names for individual objects ([ 10 – 17 ]; although learning object names may be more challenging to dogs than learning words associated with actions, see [ 18 ]). Given that owner-trained soundboard-using dogs have not yet been tested in controlled experimental contexts, we currently do not have any evidence to suggest that they have successfully associated the words produced by their soundboards to their respective consequences, let alone that they produce said words communicatively.

Another potential issue surrounding owner-trained soundboard-using dogs concerns the reliability of citizen science data [ 19 ]. Although thousands of pet owners currently contribute regular data on their animals’ soundboard use to scientific research [ 2 ], the extent to which reported interactions are cued, inaccurately reported, or cherry picked by owners is unclear and a matter under current investigation. Consequently, testing soundboard-trained dogs under controlled experimental conditions is a necessary step towards assessing this form of interspecies communication.

Given the novelty of this approach, the nature of most of the data on this form of interspecies communication, and our limited knowledge of dogs’ capacity to acquire associations in this context, in the present study we pursue a thorough and multifaceted investigation into the comprehension of button presses by humans. We begin from the premise that owner-trained soundboard-using dogs are probably more likely to associatively learn the words for routine activities or events (such as learning that the word “outside”, when spoken or pressed by a human, is usually followed by a door being opened to the backyard) than some of the more abstract words sometimes provided on owner-trained dogs’ soundboards (such as “tomorrow” or “later”). Given that dogs’ soundboards are determined by their owners and therefore are composed of buttons with different words, and have different layouts (for a detailed explanation, see [ 2 ]), here we select three of the most commonly occurring routine-related word concepts across all participants (namely, food-related, play-related, and outside-related words) and choose the most common variants within these concepts (OUT or OUTSIDE for outside-related words, PLAY or TOY for play-related words, and FOOD or EAT or DINNER or HUNGRY for food-related words) for our experiments (as pre-registered in the Open Science Framework: https://osf.io/tskcq ). Over the course of two complementary experiments, we use these three word concepts, as well as a nonsense word (the nonce word “DAXING”) to answer five fundamental questions about owner-trained soundboard-using dogs’ comprehension of the words on their soundboards.

First, using both a researcher-led in-person experiment and a remotely conducted citizen science experiment, we test whether dogs can recognise and respond appropriately to the words recorded onto their soundboards. We hypothesise that, if dogs have associatively learnt the connection between these words and the outcomes they usually entail, then they should behave in anticipation of the actions or events indicated by these words. For example, upon hearing the word “PLAY”, dogs should be more likely to walk to their toy box and pick up a toy than if they had heard “FOOD”. Conversely, we should see fewer looks to a food bowl following the word “PLAY” than following “FOOD”. Given that these words might typically be produced by owners concurrently with many other contextual cues (for example, “FOOD” might be produced most commonly at specific mealtimes, while the owner fills up the dog’s bowl), in our experimental trials we strip all words of any additional contextual cues, to ensure that dogs’ responses occur specifically in response to the words themselves.

Second, we use the two experiments to compare dogs’ responses to a word when it is produced by the owner and by an unfamiliar person (a researcher). We hypothesise that, if dogs expect words to precede their accompanying actions, and form this expectation without relying on any additional contextual or owner-provided cues, then they should respond similarly regardless of the person producing the word. Given that words on dogs’ buttons usually consist of recordings of their owner’s speech (as was the case for every dog participating in this study), and that differences in the acoustic properties of different people’s speech could conceivably generate differences in dogs’ behavioural responses unrelated to their comprehension of words, we compare their responses to button presses by the owner and presses of those same buttons by an unfamiliar person.

Third, a comparison between the two experiments–particularly in terms of dogs’ responses to button presses by their owners and an unfamiliar person–can also inform whether the results of citizen science studies on soundboard-trained dogs are broadly comparable to in-person researcher-led studies, and therefore establish whether future work with this population could be carried out through data collected remotely by citizen scientists.

Fourth, we also compare dogs’ responses to owners’ button presses and owners’ vocal production of these same words. If dogs respond appropriately and equivalently to words regardless of their mode of production (whether they are spoken directly by the owner or produced by a button press), then this would suggest that dogs are associating words’ consequences (e.g., the actions following words, such as a play interaction following “PLAY”) with the words (speech or speech recordings) themselves, rather than attending primarily to other cues such as the location of the buttons (which would predict contextually appropriate responses when buttons are pressed, but not when the same words are spoken out loud). On the other hand, if dogs respond appropriately to owners’ button presses of words but not to their vocal production, then it is likely that dogs associate button locations–or some other property of the buttons–with their accompanying actions or events, disregarding button audio.

Finally, because recent research suggests that dogs tilt their heads in response to familiar words more than unfamiliar ones [ 20 ], and that this is a lateralized behaviour, we investigated subjects’ head tilts throughout the two experiments. We hypothesise that, if dogs do in fact tilt their heads in response to words that they recognise, and soundboard-trained dogs recognise the words recorded onto their buttons, then we should see more head-tilting behaviour in response to the known words (i.e., outside-related, play-related, and food-related words) than to a nonce word (“DAXING”). This result should hold regardless of whether words are produced by button presses or spoken by the owner. If this is true, then this study can also provide a larger sample size to inform whether this head-tilting behaviour is more commonly left-lateralized or right-lateralized, or whether, as in the original study, it is split evenly across subjects. If, on the other hand, we find evidence to suggest that soundboard-trained dogs do respond appropriately to known words but do not tilt their heads upon hearing them, this might instead suggest that there are population or breed differences between soundboard-trained dogs and the “gifted word learner” [ 15 ] dogs in the study conducted by Sommese and colleagues [ 20 ].

In sum, the present study aims to investigate five principal questions: (1) whether dogs can recognise and respond appropriately to the words recorded on their soundboards; (2) whether they exhibit these responses in the absence of other contextual or owner-produced cues, including when the words are produced by an unfamiliar person; (3) whether citizen science studies and in-person studies can produce comparable results in this population of subjects; (4) whether dogs attend to speech specifically in this context, or to other cues; and (5) whether owner-trained soundboard-using dogs tilt their heads in response to familiar words in the same way that dogs trained to recognise object names do.

Experiment 1: In-person study

For the in-person study (IPS), subjects were 30 dogs (14 males; age: M = 3.52 years, SD = 2.91; see Table 1 for subject information). No subjects were excluded from analyses. All dogs were family pets trained by their owners to use soundboard AIC devices, and all had soundboards containing the words OUT/OUTSIDE, PLAY/TOY, and FOOD/EAT/DINNER/HUNGRY. Because the study would involve the owner wearing multiple accessories on their face, owners were asked to habituate their dogs, in their own time, to people wearing sunglasses, hats, and other head accessories prior to the study date. Dogs were tested in their home settings with their owners present; owners were not told the purpose of the study until testing was concluded. All subjects lived primarily indoors and were fed indoors. The study received ethics approval from the UCSD IACUC (protocol no. S21098).

Table 1. Subject information for the in-person study (IPS).

Age at time of testing is provided in years, rounded to the nearest decimal point, as per owner reports. For mixed breed dogs, up to three primary component breeds (based on owner reports or genetic testing) are provided in brackets.

https://doi.org/10.1371/journal.pone.0307189.t001

The study was conducted in dogs’ homes, in the room where their soundboard was typically located. Upon arrival, one researcher (Experimenter 1, hereafter E1) waited outside the house whilst the other (Experimenter 2, hereafter E2) went into the home and greeted the owner and their dog, before placing the dog either in the backyard or in a room in the house other than the one where the soundboard was located.

Once the dog was out of sight, E2 placed three large coloured stickers over the three buttons containing the words of interest: (1) “OUT” or “OUTSIDE”; (2) “PLAY” or “TOY”; and (3) “FOOD” or “EAT” or “DINNER” or “HUNGRY”. This ensured that, at test, E1 could not tell the buttons’ identity based on any writing or symbols visible on the buttons. E2 then asked the owner to record their voice onto a new button (matching the size and style of the dog’s existing soundboard buttons) for the word “DAXING”, which was then placed on the soundboard and covered with another coloured sticker. Sticker colours were randomly assigned to buttons between subjects, such that E1 could not infer the identity of any button from the colour of the sticker placed on it. Stickers were made from green, red, yellow, and blue cardstock paper and adhesive putty. Owners were also shown a novel action–placing their hands on their head and spinning in a slow circle–which they were told they would have to perform at some point later in the study.

The dog was then brought back into the room where their soundboard was located, and E1 was invited into the home and allowed to greet the dog and owner. E2 then set up the camera recording equipment (Panasonic Full HD Camcorder HC-V180K, GoPro Hero7) in the room, and an Anmeate Baby Monitor facing the soundboard, such that multiple camera angles of the room were recorded ( Fig 1 ).

Fig 1. Example camera view of a testing setup in a subject’s home.

The door used for the button concept “OUTSIDE” is outlined in blue, the dog’s toys are outlined in orange, and the food bowl is outlined in green. During trials for this subject, for example, the owner sat in the reclining chair on the bottom left of the frame, facing away from the soundboard, while wearing noise-cancelling headphones, a face mask (as a Covid-19 precaution), and a black sleep mask.

https://doi.org/10.1371/journal.pone.0307189.g001

While E1 explained the experimental procedure to the owner and ensured the dog was comfortable and behaving normally in their presence, E2 identified a different room in the house in which to hide so that they could remotely monitor the study (through a live feed of the baby monitor). Next, the owner was asked to wear a sleep mask and noise-cancelling headphones and to confirm that the dog was not fearful of either accessory. Finally, E2 handed E1 their headphones and set both headphone volumes such that neither E1 nor the dog’s owner could hear E2 speaking loudly from a short distance, ensuring that they would also be unable to hear the recordings on any of the buttons when they were pressed during a trial.

Trials began with E1 and the owner sitting in the soundboard room whilst the dog ranged freely around the room. The owner wore a sleep mask and listened to music on noise-cancelling headphones so that they could neither see nor hear any of the trial procedures. This ensured that owners remained blind to the condition of each trial while still being present in the room to ensure that dogs were comfortable and behaving normally.

At the start of each trial, E2 remotely triggered a sound file to E1’s headphones stating the colour of the sticker for the button they would press in that trial, then remotely started playing music into the headphones of both E1 and the dog’s owner. E1 looked straight ahead until they heard a beep in their headphones, again triggered via Bluetooth by E2. At this point, E1 stood up and began the trial: E1 walked to the same place by the soundboard each time, pressed the button covered by the sticker colour they were given, stepped away from the button again, and looked in the direction of the dog. E1 froze in this position for 1 minute while they waited for the next beep in their headphones, at which point they turned around to face the nearest wall and closed their eyes. This final beep and turn concluded the trial.

Following the end of the trial, E2 triggered an audio file in the owner’s headphones stating that they could remove their headphones and sleep mask, and giving the owner instructions on an action to perform. Unbeknownst to the owner, this action always matched the button recently pressed by E1. For “OUT” or “OUTSIDE”, the owner was asked to let their dog outside as they normally would (e.g., open the door to the backyard so the dog could go out). For “PLAY” or “TOY”, the owner was asked to briefly engage their dog in play with a toy. For “FOOD”, “EAT”, “DINNER”, or “HUNGRY”, the owner was asked to place a small amount of their dog’s usual food in their food bowl. For “DAXING”, the owner performed the novel action they had practiced earlier, namely placing both hands on the top of their head and spinning in a slow circle. While the owner performed their action, E1 remained with their eyes closed, listening to music through noise-cancelling headphones, so that they would not be unblinded to the button they had just pressed. Once the owner had performed their action, E2 came out of their hiding place and asked the owner to return the room to its starting state (e.g., closing any doors that were opened and placing any toys back where they were at the start of the trial) and to return to their starting position in the room. E2 then walked over to E1 and tapped them on the shoulder, so that E1 knew they could remove their headphones, open their eyes, and turn back around.

Experiment 2: Citizen science study

In the citizen science study (CSS), subjects were a new group of 29 dogs (9 males, age: M = 2.95 years, SD = 1.91; see Table 2 for subject information). Another 3 subjects (3 males) completed the study but had their data excluded due to procedural errors, and a further 8 subjects (4 males) were recruited for the study but never completed it. All dogs were owner-trained family pets with soundboards containing the same words as required in Experiment 1. Owners were asked to habituate their dogs to sunglasses prior to beginning the study, since owners were asked to wear mirrored sunglasses throughout all trials. Dogs were tested in their home settings by their owners, who were not told the purpose of the study until testing was concluded. As before, all dogs lived primarily indoors and were fed their meals indoors, and the study received ethics approval from the UCSD IACUC (protocol no. S21098).

Table 2. Subject information for the citizen science study (CSS).

https://doi.org/10.1371/journal.pone.0307189.t002

Owners were given study instructions in three formats: an explanatory video demonstrating study procedures; written materials with step-by-step instructions; and a 15-minute Zoom call with a researcher who talked them through the study instructions and answered any remaining questions. As with the in-person study, owners were not told the purpose of the experiment until after they had concluded their participation and submitted video data of all trials to the research team. Debriefs were also conducted by an experimenter through another 15-minute Zoom call. Given that they administered the experimental trials to their own dogs, owners could not be made blind to conditions.

In each trial, owners either pressed the button for, or spoke, one of the words used in Experiment 1, namely: (1) “OUT” or “OUTSIDE”; (2) “PLAY” or “TOY”; and (3) “FOOD” or “EAT” or “DINNER” or “HUNGRY”, and (4) the nonce word “DAXING”. Each word was produced twice by the owner over the course of the study, once spoken and once pressed. Owners carried out two trials per day, allowing an interval of at least 30 minutes between the two trials within a day, over the course of four days, which were spread over a maximum of 2 weeks. Therefore, dogs in this study experienced a total of eight conditions: a spoken outside word, a pressed outside word, a spoken play word, a pressed play word, a spoken food word, a pressed food word, the word “DAXING” produced by a button press, and “DAXING” spoken by their owner. Trial orders were randomised across subjects.
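To illustrate the counterbalancing described above, the following is a minimal sketch of how a randomised per-subject schedule of the eight word-by-mode combinations across four testing days could be generated. It is not the scheduling procedure used by the authors; the function and column names are illustrative.

  # Generate a randomised trial order for one subject: the 8 word-by-mode
  # combinations shuffled and split into 2 trials per day across 4 days.
  # Purely illustrative; not the scheduling code used in the study.
  make_schedule <- function(subject_id, seed = NULL) {
    if (!is.null(seed)) set.seed(seed)
    conditions <- expand.grid(
      word = c("OUTSIDE", "PLAY", "FOOD", "DAXING"),
      mode = c("spoken", "pressed"),
      stringsAsFactors = FALSE
    )
    conditions <- conditions[sample(nrow(conditions)), ]  # randomise trial order
    conditions$subject <- subject_id
    conditions$day     <- rep(1:4, each = 2)              # two trials per day
    conditions$trial   <- rep(1:2, times = 4)
    conditions
  }

  make_schedule("dog_01", seed = 42)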

Before the start of each trial, owners were asked to move their dog to a different room of the house where they could not see their soundboard. Owners then recorded a new button for the word “DAXING” and placed it in an empty slot on their dog’s soundboard. The dog was brought back into the room where their soundboard was located for the start of the trial.

Each trial began with the owner standing in the same location next to the dog’s soundboard. They then either pressed the button or spoke the word specified for that trial. Owners then refrained from interacting with their dog for the next minute; owners were specifically instructed not to speak to, point to, react to, gesture at, or otherwise interact with their dog or their environment for a full minute. At the end of each trial, owners were asked to perform the matching action for the pressed or spoken word, as specified by their participant sheet (e.g., as in Experiment 1, owners were asked to open the door and let their dog into the backyard following the production of the word “OUTSIDE”, and to spin slowly with their hands on their heads following the word “DAXING”). Following their post-trial action, owners again took their dogs out of the room and returned the soundboard to its usual configuration, removing the button for the word “DAXING”, such that dogs could not press the “DAXING” button in the intervals between trials and on non-trial days. The full trial time was recorded on video by the owners, sometimes from multiple camera angles.

Coding and analyses.

Videos for both studies (Experiment 1 and Experiment 2) were processed prior to blind coding to ensure that blind coders could never observe the button press produced by E1 (Experiment 1), the button press or word spoken by the owner (Experiment 2), or the post-trial action performed by the owner. Video files were renamed so as to remove any trial information. Videos from different camera angles of the same trial were time-matched and aligned side-by-side into a single video file. E1 and the owner were occluded by a black rectangle so that blind coders could not see any humans on the screen, and each video was cut so that it began following the moment that E1 stood upright after pressing a button and ended 60 seconds later.

Blind coders were naïve to the experimental design, hypotheses, and conditions. They were initially trained on a set of videos which included trials of the study piloted with non-eligible dogs and videos of non-subject dogs crowdsourced online. Coders annotated videos following a pre-determined ethogram ( Table 3 ). Interrater agreement was substantial throughout training (Kappa coefficient of 0.81, with a confidence interval of [0.76; 0.86] for 100% of the training set) and remained satisfactory throughout the coding of the full dataset (Kappa coefficient of 0.70, with a confidence interval of [0.64; 0.76] for a randomly selected subset comprising 10% of all data).
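Agreement statistics of this kind can be computed with Cohen’s kappa. The following is a minimal hand-rolled sketch for two coders assigning ethogram categories to the same set of events; the rater vectors are made-up examples rather than data from this study.

  # Cohen's kappa for two raters assigning categorical ethogram codes to the
  # same events. rater1 and rater2 are equal-length vectors of category labels.
  cohens_kappa <- function(rater1, rater2) {
    categories <- union(unique(rater1), unique(rater2))
    tab <- table(factor(rater1, levels = categories),
                 factor(rater2, levels = categories))
    n  <- sum(tab)
    po <- sum(diag(tab)) / n                      # observed agreement
    pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
    (po - pe) / (1 - pe)
  }

  # Illustrative example with six coded events.
  rater1 <- c("food", "outside", "play", "play",  "other", "outside")
  rater2 <- c("food", "outside", "play", "other", "other", "outside")
  cohens_kappa(rater1, rater2)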

Table 3. Ethogram used for behavioural coding.

We predicted that dogs would show more outside-directed behaviours in response to “OUT” or “OUTSIDE” words, more toy-directed behaviours in response to “PLAY” or “TOY” words, and more food-directed behaviours in response to “FOOD”, “EAT”, “DINNER”, or “HUNGRY” words.

https://doi.org/10.1371/journal.pone.0307189.t003

To determine whether dogs responded as expected to E1’s button presses (in Experiment 1) and the owners’ button presses (in Experiment 2), we investigated whether their behavioural responses to button presses were consistent with individual buttons’ words. For example, upon observing a button press for “OUTSIDE”, a dog should be more likely to move toward the door than upon observing a press for “FOOD”.

The data were analysed using Bayesian linear mixed-effects models implemented in brms [ 21 ] in R [ 22 ]. Specifically, we used Bernoulli models, which require the dependent variable to be binary. In our case, we ran three separate Bernoulli models, one for each type of behaviour (food-directed behaviours, outside-directed behaviours, and play-directed behaviours). Our reasoning was that, if dogs have formed associations between these words and their meanings, then they should show the behaviour appropriate to the word produced in a given condition.

The dependent variable for each model was whether, on any given trial, the dog showed the target behaviour. For example, in the model for the FOOD condition, the dependent variable was 1 for a given trial if the dog showed a food-related behaviour, and 0 if not (where “trial” here refers to a coded behaviour, so each food-related behaviour would be coded 1 in the FOOD model, and each non-food behaviour would be coded 0). A complete model description can be found in the analysis script (which is provided in full at https://github.com/znhoughton/comprehension ).

The independent variables were Condition (Food, Outside, or Play), Experiment (IPS or CSS), and their interaction, with maximal random effects [ 23 ]. For each model, we used weakly informative priors. The model syntax for each model is included below; a sketch of how one such model might be fitted follows the list:

  • Model 1: Play behaviours ~ Condition*Experiment + (1 + Condition|Subject)
  • Model 2: Outside behaviours ~ Condition*Experiment + (1 + Condition|Subject)
  • Model 3: Food behaviours ~ Condition*Experiment + (1 + Condition|Subject)
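The following is a minimal sketch of how Model 1 could be fitted in brms. It assumes a long-format data frame, here called behaviours, with one row per coded behaviour and illustrative column names (is_play, Condition, Experiment, Subject) that are not taken from the authors’ published analysis script; the prior values are likewise illustrative stand-ins for “weakly informative”.

  # Sketch of Model 1 (play-directed behaviours), using assumed column names.
  library(brms)

  # Sum coding, so the intercept is the grand mean (in log-odds) and each
  # coefficient is a deviation from that mean.
  behaviours$Condition  <- factor(behaviours$Condition)   # Food / Outside / Play
  behaviours$Experiment <- factor(behaviours$Experiment)  # IPS / CSS
  contrasts(behaviours$Condition)  <- contr.sum(nlevels(behaviours$Condition))
  contrasts(behaviours$Experiment) <- contr.sum(nlevels(behaviours$Experiment))

  # Weakly informative priors (illustrative values).
  priors <- c(
    set_prior("normal(0, 2)", class = "Intercept"),
    set_prior("normal(0, 1)", class = "b")
  )

  model_play <- brm(
    is_play ~ Condition * Experiment + (1 + Condition | Subject),
    data   = behaviours,
    family = bernoulli(),
    prior  = priors,
    chains = 4, cores = 4, iter = 4000,
    seed   = 1234
  )

  summary(model_play)  # posterior estimates and 95% credible intervals

Models 2 and 3 follow the same template with the outside-directed and food-directed dependent variables.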

Next, using only data from Experiment 2 (CSS), we tested whether dogs behave comparably in response to a spoken word and a word produced by a button press. To do so, we once again used three Bayesian logistic regression models implemented in brms in R, with the same dependent variables as above but with fixed effects of Condition and Mode (i.e., spoken or pressed) and their interaction, and by-subject random slopes for Condition, Mode, and their interaction. These models are described in the following three equations, with a brief sketch after the list:

  • Model 4: Food behaviours ~ Condition*Mode + (1 + Condition*Mode|Subject)
  • Model 5: Outside behaviours ~ Condition*Mode + (1 + Condition*Mode|Subject)
  • Model 6: Play behaviours ~ Condition*Mode + (1 + Condition*Mode|Subject)
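Continuing the sketch above, and under the same assumptions about column names, Models 4–6 only change the formula and restrict the data to Experiment 2; Mode is the additional spoken-versus-pressed predictor (the Mode column is again illustrative).

  # Sketch of Model 4 (food-directed behaviours), CSS data only, with Mode
  # (spoken vs. pressed) and maximal by-subject random slopes.
  css <- subset(behaviours, Experiment == "CSS")
  css$Mode <- factor(css$Mode)          # "spoken" or "pressed"
  contrasts(css$Mode) <- contr.sum(2)

  model_food_mode <- brm(
    is_food ~ Condition * Mode + (1 + Condition * Mode | Subject),
    data   = css,
    family = bernoulli(),
    prior  = priors,
    chains = 4, cores = 4, iter = 4000,
    seed   = 1234
  )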

Finally, given recent work suggesting that head-tilting is a lateralized behaviour which occurs when dogs recognise and process known words [ 20 ], we planned to compare the number of head tilts (to both sides first, then separately for the right and left sides) dogs made in the nonce (“DAXING”) condition and in the three meaningful known-word conditions (Outside, Play, and Food conditions) in both the researcher-led experiment and the citizen science component of this study. To assess this, we set out to use a Bayesian linear mixed-effects model with a Poisson distribution implemented in brms in R, with the number of head tilts as the dependent variable, Condition and Experiment as well as their interaction as fixed effects, and random intercepts for subject and random slopes for condition by subject, as shown in the sketch below. Our model specification is listed in the following equation: Number of head tilts ~ Condition * Experiment + (1|Subject) + (Condition|Subject).
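The planned head-tilt analysis can be sketched in the same framework; the data frame trials (one row per trial, with an n_head_tilts count) and its column names are again illustrative rather than taken from the authors’ script.

  # Planned (ultimately under-powered) head-tilt model: a Poisson count model
  # with the pre-registered random-effects structure.
  model_tilts <- brm(
    n_head_tilts ~ Condition * Experiment + (1 | Subject) + (Condition | Subject),
    data   = trials,
    family = poisson(),
    prior  = priors,
    chains = 4, cores = 4, iter = 4000,
    seed   = 1234
  )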

All analyses were pre-registered in the Open Science Framework ( https://osf.io/tskcq ).

Question 1: Can dogs recognise and respond appropriately to the words recorded on their soundboards?

Across both experiments, the odds of dogs exhibiting play-directed behaviours in the Play Condition were approximately seven times greater than the average across conditions ( Table 4 ), and the odds of exhibiting outside-directed behaviours in the Outside Condition were approximately seven times greater than that average ( Table 5 ), suggesting that dogs recognised and responded appropriately to these two words. We found no conclusive evidence that dogs exhibited more food-directed behaviours in the Food Condition than in the other two conditions ( Table 6 ). Note that, since all models used sum coding, the intercept in all tables represents the grand mean, and the coefficient values represent the distance, in log-odds, between the effect and the intercept; a worked example of this interpretation follows. Dogs’ behaviours across the three familiar-word conditions of both experiments are shown in Fig 2 .
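Because the models are fitted on the log-odds scale with sum coding, a coefficient translates into the “approximately seven times” statements above by exponentiation. A minimal illustration, using a made-up coefficient value rather than an estimate from the tables:

  # With sum coding, a condition coefficient is the deviation of that condition
  # from the grand mean in log-odds; exponentiating it gives the multiplicative
  # change in odds relative to the cross-condition average.
  beta_play <- 1.95   # illustrative log-odds deviation for the Play Condition
  exp(beta_play)      # ~7: odds about seven times the average across conditions

  # From a fitted brms model the same quantity comes from the fixed effects;
  # the row name depends on the contrast coding and is hypothetical here.
  # exp(fixef(model_play)["Condition1", "Estimate"])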

Fig 2. Dogs’ behaviours across the three familiar-word conditions of both experiments.

Proportion here is the number of behaviours of a specific type (e.g., outside behaviours) in a given condition (e.g., the Play Condition) divided by the total number of behaviours in that condition. The x-axis is Condition. The y-axis is the average proportion of each behaviour type across participants. Each point represents the proportion of behaviours of the corresponding type in the respective condition. The error bars represent standard errors.

https://doi.org/10.1371/journal.pone.0307189.g002
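The quantity plotted in Fig 2 can be reconstructed from the coded behaviours roughly as follows; this is a sketch assuming a per-behaviour data frame with illustrative column names (Subject, Condition, BehaviourType), not the authors’ plotting code.

  # Per-subject proportions of each behaviour type within each condition,
  # averaged across subjects, with a standard error for the error bars.
  library(dplyr)
  library(tidyr)

  fig2_data <- behaviours %>%
    count(Subject, Condition, BehaviourType) %>%
    complete(Subject, Condition, BehaviourType, fill = list(n = 0)) %>%
    group_by(Subject, Condition) %>%
    filter(sum(n) > 0) %>%            # drop subject-condition cells with no coded behaviours
    mutate(proportion = n / sum(n)) %>%
    group_by(Condition, BehaviourType) %>%
    summarise(
      mean_prop = mean(proportion),   # y-axis values in Fig 2
      se        = sd(proportion) / sqrt(n()),
      .groups   = "drop"
    )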

Table 4. Model results for play-directed behaviours.

In the Play Condition, the odds of dogs displaying play-directed behaviours were approximately seven times greater than average across the three meaningful-word conditions.

https://doi.org/10.1371/journal.pone.0307189.t004

Table 5. Model results for outside-directed behaviours.

In the Outside Condition, the odds of dogs displaying outside-directed behaviours were approximately seven times greater than average across the three meaningful-word conditions.

https://doi.org/10.1371/journal.pone.0307189.t005

Table 6. Model results for food-directed behaviours.

Dogs were not more likely to perform food-directed behaviours in the Food Condition compared to Play or Outside Conditions.

https://doi.org/10.1371/journal.pone.0307189.t006

Question 2: Do dogs respond equivalently to words produced by their owner compared to an unfamiliar person?

We found no effect of button presser identity (owner or unfamiliar person) on dogs’ behaviours for any of the three conditions (see credible intervals overlapping zero in Tables 5 – 7 for IPS and CSS comparisons). This suggests that dogs respond appropriately to button presses even in the absence of other contextual or owner-produced cues.

Table 7. Model results for food-directed behaviours by condition and mode (Experiment 2).

“Play:Press”, “Food:Press” and “Outside:Press” correspond to the interactions between each condition and mode. For example, the effect of “Play:Press” is the extent to which the effect of “Play” differs when the dog hears a recording from a button press as opposed to a spoken word.

https://doi.org/10.1371/journal.pone.0307189.t007

Question 3: Can citizen science studies and in-person studies of soundboard-trained dogs produce comparable results?

We found no difference in dogs’ behaviours in response to owner-produced (CSS) and experimenter-produced (IPS) button presses (see Tables 4 – 6 ): dogs behaved similarly in response to button presses in the in-person study and the citizen science study, suggesting that the two approaches produced comparable results.

Question 4: Do soundboard-trained dogs attend to speech?

Within Experiment 2 (CSS), dogs’ responses to spoken and pressed words across the three conditions were comparable for food-related behaviours ( Table 7 ), outside-related behaviours ( Table 8 ), and play-related behaviours ( Table 9 ). Overall, these results suggest that the two modes of word production led to equivalent behavioural responses by subjects.

Table 8. Model results for outside-directed behaviours by condition and mode (Experiment 2).

https://doi.org/10.1371/journal.pone.0307189.t008

Table 9. Model results for play-directed behaviours by condition and mode (Experiment 2).

https://doi.org/10.1371/journal.pone.0307189.t009

Question 5: Do soundboard-trained dogs tilt their heads in response to familiar words?

Ultimately, our dataset contained too few head-tilting observations (9 instances) for this analysis to be completed, so we cannot ascertain whether head-tilting was more common in meaningful known-word conditions (Play, Food, and Outside Conditions) compared to the nonce word condition (Daxing Condition).

Our study suggests that dogs were more likely to perform play-related behaviours after an experimenter or their owner produced a play-related word, and more likely to exhibit outside-related behaviours after an experimenter or their owner produced an outside-related word. This demonstrates that dogs are, at the very least, capable of learning an association between these words or buttons and their outcomes in the world. On the other hand, dogs did not preferentially produce food-related behaviours in response to the relevant words. This may have occurred because several of the dogs taking part in the study were satiated before the start of test trials, or because dogs did not expect to be served a meal outside of their usual feeding times. Our in-person study was typically conducted during work hours, which do not usually overlap with adult dogs’ mealtimes in the early mornings and/or evenings.

Given that dogs responded equivalently to their owners’ button presses and an unfamiliar person’s button presses, our results also demonstrate that dogs attend and respond to the buttons or words themselves, rather than behaving solely on the basis of unintentional cues provided by their owners (e.g., as would be expected from a Clever Hans effect). In both experiments, even when all word-related contextual cues were removed from the human’s interaction with the AIC device, dogs still responded with contextually appropriate behaviours. In everyday use, if dogs typically observe their owners picking up a toy before pressing a button for a play-related word, any play-related behaviours the dogs exhibit might occur in response to the sight of the toy, the owner’s button press, or a combination of both events. Because our citizen science experiment specifically required that owners press buttons without performing any other actions, the fact that dogs still displayed contextually appropriate responses in the absence of other cues demonstrates that they attended specifically to the words recorded onto their buttons. Additionally, dogs responded appropriately to an unfamiliar human making button presses, even though that person was unaware of which button they were pressing.

Both word sounds and the locations of their respective buttons on dogs’ soundboards are highly correlated with their effects in the world. For example, when owners press the button OUTSIDE, dogs might equally attend to the location of that button on their AIC device or to the sound (word) produced by the button press. Our citizen science study addressed this question by comparing dogs’ responses to their owners’ speech and button presses. We found no differences in dogs’ responses to either mode of word production, suggesting that most dogs do not associate button location alone with the outcomes of button presses. If they did, they should perform considerably more context-appropriate behaviours in response to the relevant button presses than to owners’ spoken words. Although this does not necessarily preclude the possibility that dogs formed two separate associations–one for the button location and another for the speech sound of the word–it nevertheless demonstrates that most dogs can and do attend to, and appropriately respond to, the auditory properties of words.

Crucially, we found no differences in dogs’ behaviours across the two experiments, suggesting that owners and researchers administered the procedures in a sufficiently equivalent manner to yield comparable results: the citizen science version of the study produced results comparable to those obtained during researcher visits to owners’ homes. This is important because soundboard-trained dogs are spread all over the world and we do not yet know how moving their soundboards outside of their home environment might affect their soundboard use [ 2 ]. Our findings offer a promising outlook for future citizen science studies with this population of owners and their dogs: remotely conducted citizen science studies could be a critical tool for studying this large and geographically widespread population and for maintaining long-term engagement of pet owners with this research program. However, we advise that future studies again validate citizen science methods against researcher-led in-person experiments, thereby replicating our findings, before scientists rely more heavily on citizen science for this and similar work. Additionally, we highlight that all procedures and analyses of this study were pre-registered in advance of data collection, and that future studies extending this work would also benefit from pre-registration. Pre-registration is an important tool for ensuring the transparency and reproducibility of research, and its use is critical for large-scale exploratory studies and citizen science studies.

In sum, our findings provide the first evidence of button-word comprehension by owner-trained soundboard-using dogs, and demonstrate that dogs’ contextually appropriate responses to button presses were comparable regardless of the identity of the person using the soundboard and persisted in the absence of other environmental cues related to the word. Our findings also suggest that dogs attend to the sounds recorded onto their buttons, given that they responded equivalently to words produced by button presses and spoken by their owners. A summary of our findings is tabulated in Table 10 below.

Table 10. Summary of findings.

https://doi.org/10.1371/journal.pone.0307189.t010

In our study, dogs responded to spoken or pressed play-related and outside-related words with contextually appropriate responses slightly more often than expected by chance. The accuracy of dogs’ responses in our study is comparable to dogs’ accuracy in responding to human pointing [ 24 ]. However, dogs are capable of much greater accuracy when purposely trained to respond to stimuli, as is the case for dogs trained to detect wildlife [ 25 ], individual people [ 26 ], and chemical substances [ 27 , 28 ] through scent. This discrepancy in performance could be due to the nature of the stimuli: perhaps dogs find it more difficult to form associations between words and their respective outcomes than between scents and a consistent reward. Alternatively, communicative contexts may produce less predictable responses because of the weaker correlation between the perception of relevant stimuli (a pointed finger, or a word) and their outcomes, leading to less strongly conditioned responses to the stimuli. While scent detection dogs undergoing training will likely experience reinforcement almost every time the target stimulus is present, a pet dog may receive no reinforcement on a great number of occasions when they hear familiar words or observe human pointing. Relatedly, scent detection dogs’ accuracy is much higher in contexts where target odours are present at very high rates, with implications for their performance in real-world scenarios [ 29 ]. It is therefore also possible that task-oriented trained contexts, such as scent detection, are much more motivating to dogs, and therefore more likely to trigger consistent responses, than day-to-day contexts involving more variable reinforcement. Further, measures of performance in working dogs such as scent detection dogs are typically based on a small subset of the population that passes stringent standards of training [ 30 ], and typically involve animals that are selectively bred for the purpose of scent detection and further selected based on temperament or cognitive traits [ 30 – 32 ], whereas the soundboard-trained population comprises owner-trained pet dogs, whose temperaments and cognitive traits are likely to vary considerably.

Having established that soundboard-trained dogs can and do attend to and comprehend words, future work should also disambiguate the extent to which spatial information about button positions, or other potential cues to button identity, might aid dogs’ ability to use AIC devices. Additionally, more research is needed to investigate soundboard-trained dogs’ responses to a wider range of words, particularly in comparison to a population of non-soundboard-trained pet dogs. Although owners anecdotally report that pet dogs spontaneously acquire comprehension of large spoken vocabularies [ 9 ], there are no fully controlled experiments investigating whether dogs exhibit contextually appropriate spontaneous responses to familiar words in the absence of other contextual cues, as in the present study. This is crucial because, although owners may, for example, report that their dogs respond appropriately to a food-related word, this word is typically presented alongside a myriad of other confounding cues, such as the time of day when the dog’s meals are served, the presence of a bowl, or the behaviours their owner might perform before serving their dog’s dinner. Finally, our ongoing work is investigating dogs’ word production [ 2 ]. To determine whether dogs’ performance at word comprehension is reflected in their button pressing, carefully controlled future studies must investigate whether dogs can spontaneously produce contextually appropriate button presses in experimentally induced situations (as in [ 4 ]). Not only would such a study help clarify the depth of soundboard-trained dogs’ word comprehension, but it would also establish the extent to which AIC devices can be used for two-way interspecies communication, involving both presses made by the owner for their dog and by the dog for their owner.

Acknowledgments

We are grateful to all our dog participants and their owners for contributing their time and data to our research project.

References

  • 3. Hunger C. How Stella Learned to Talk: The Groundbreaking Story of the World’s First Talking Dog. New York, NY, US: William Morrow; 2021.
  • 7. Sebeok TA, Rosenthal R, editors. The Clever Hans phenomenon: Communication with horses, whales, apes, and people. Annals of the New York Academy of Sciences. 1981;364.
