Assessment in Adult Education

Barbara E. Smith, Ed.D.
Hudson River Center for Program Development, Inc.


Of all the issues facing adult educators in the past few decades, assessment has been the cause of more frustration and disgruntlement than any other. Assessing adult learners is a complex task: learners arrive with different levels of knowledge and ability, and their varied life and professional experiences affect how they learn and understand material. The complexity of assessment, as it pertains to adult education, has been unraveled and scrutinized. Great minds have suggested numerous strategies at countless forums. Recommendations and requirements have been generated at every governmental level. More systems and tests have been developed. And still, the discussion persists.

It is very possible that the dissatisfaction stems not from the recommended or required battery of tests, but from the extent to which the results of those tests leave some rather basic questions unanswered. After all, assessment results are used to: place learners at appropriate instructional levels, measure ongoing adult learner progress, qualify adult learners enrolling in academic or job training programs, verify program effectiveness, and demonstrate learner gains to justify continued funding (Burt and Keenan, 1995). There is considerable disagreement, and not a little evidence, that neither batteries of tests nor any single measure can effectively accomplish even one of these uses, let alone all of them. And still the basic questions go unanswered.

The questions heard most often from adult educators are:

  1. How is literacy defined in my program?
  2. How can I find out where to "place" a prospective adult learner in my class or program?
  3. What assessment can I use to measure adult learner gain?
  4. What other measures can I use to determine adult learner success?

Answers to these questions would add clarity to an area of adult education that causes great consternation among its practitioners.

Current Governmental Requirements

The New York State Education Department (SED), like most state education agencies, has questions of its own, such as whether adult learners are successful in the adult education programs it funds. Therefore, the department requires some testing as evidence that adult education programs are doing a good job and thus are eligible for continued funding.

Most adult and continuing education programs are subject to SED requirements. Currently, these requirements include: "Unless a waiver has been granted, the Test of Adult Basic Education or TABE test should be used for reading and math and the NYS Place Test used for English for Speakers of Other Languages or ESOL" (SED Memorandum, 1997). As the above requirements have been in effect for a number of years, one might correctly assume that adult education programs use the TABE and the NYS Place Test for assessment.

The federal government also wants to know that the programs it funds are successful. In its direction to the states, the federal government requires the development and implementation of indicators of program quality related to recruitment, retention, and literacy skill improvement. States must "…gather and analyze data (including standardized test data) to determine the extent to which the adult programs are achieving the goals set forth in the plan" (Venezky, 1994). No specific tests or benchmarks are suggested by the United States Department of Education (USDOE), although the TABE and the Comprehensive Adult Student Assessment System (CASAS) benchmarks are included as illustrations in its guidelines. Of course, states must modify their own reporting to be consistent with federal definitions for federal reporting.

Definitions of Literacy

How literacy is defined in an adult education program is critical to assessment practice. Defining literacy has been the subject of much discussion for decades. Venezky, Wagner, and Ciliberti assert that definitions of literacy are more political than scientific because they are intended to classify and certify people by ability or privilege (1990). The authors continue "…definitions and measurements of literacy are necessarily interdependent because changes in one entail changes in the other." Is literacy defined as the possession of basic skills in reading and mathematics or is it an ability to function in society to achieve one's goals? What are the implications for assessment in each choice?

In the basic skills versus functional or applied literacy debate, there are advantages to each. According to Venezky, assessments of basic skills can be translated directly into instructional policy while, in contrast, functional literacy assessments have no direct instructional implications (1996). Functional literacy skills are derived from specific documents and tasks and, thus, provide no basis for predicting behaviors in other contexts. While the basic skills approach does have applicability from one context to another, it far too often approximates previous unsuccessful experiences in traditional schooling.

Sabatini et al. suggest a common-sense starting point for resolving the argument: adult literacy assessment should match the instructional program (1995). Lytle and Wolfe underscore this, observing that it is important that there be congruence between particular approaches to assessment and a program's curricula and teaching practices (1989). It is fair to say that most adult education programs still employ a basic skills approach. Nonetheless, to the extent that the goals in adult literacy should be the adult learner's goals, a functional literacy approach may be dictated under certain circumstances. Flexibility, then, may need to be inherent in assessment practices.

In addition to the basic skills versus functional literacy debate, another issue in defining literacy is that literacy is neither a distinct event nor a package of predetermined skills (Taylor and Dorsey-Gaines, 1988). Nonetheless, complex boundaries have been set up to measure literacy. These boundaries do not really exist. Instead, each individual possesses a literacy proficiency that is a function of experience, culture, ethnic and racial background, age, gender, socio-economic status, health, work life, family life, and so on. According to Paris and Parecki, new theories of adult literacy emphasize the kinds of thinking and motivational beliefs that adults bring to various literacy activities based on their individual experiences (1993). Again, these experiences dictate the instructional approach and, hence, the approach to assessment.

How the New York State Education Department addresses literacy determines assessment practices in its adult education programs. There are many questions in the discussion. Is literacy the key to lifelong learning as described in Adult Literacy: The Key to Lifelong Learning (1992)? Is literacy the acquisition of basic reading and mathematics? Is it the ability to function within a prescribed framework of skills and activities? Is it both? Is literacy the achievement of the adult learner's goals? Are some literacy goals so compelling as to be mandated regardless of the learner's goals? Should individual programs define literacy and thus assessment practices? Should SED have a series of guidelines related to the importance of consistency between the definition of literacy and assessment practices, describing a number of options? The definition of adult literacy and the correlative value of adult learners' goals must begin the discussion that will lead to decisions about assessment practices.

Placing Adult Learners at Appropriate Instructional Levels

Most literacy providers would agree that assessment is a multi-faceted process that, for many reasons, often incorporates a linear approach. Knowing at what levels new adult learners should be placed is a reason often given for testing.

An alternative to consider is self-placement, i.e., examples of the materials used at each instructional level are provided so adult learners can decide where they might best begin (Venezky, 1993). Self-placement avoids the grade-level characterization so hurtful to people who have had unsuccessful, even destructive, experiences in traditional schooling. Weigh the alternatives. What is the penalty for an adult learner being given the incorrect instructional program? Rarely is that penalty serious. The adult learner is merely moved to another class or different instructional material. Learners using self-placement can simply be told that it is sometimes necessary to move to another class or program after a few weeks.

If a test must be given for placement, lengthy procedures are not necessary according to Venezky (1993). The study, a joint project of SED, the University of Delaware, and the Adult and Continuing Education Program of the White Plains Public Schools, recommends the TABE Locator or even the TABE Vocabulary Locator as effective and reasonable options for placement in many programs.

Placement in New York's ESOL adult education classes mandates the use of the NYS Place Test. This test was developed in the mid-1980s by an Adult ESL Test Committee comprised of ESL educators from New York and surrounding states. The test components include an oral warm-up, a basic English literacy screening, and an oral assessment with pictures. Consistent with Venezky's recommendation, the placement test is brief, taking 10-15 minutes to administer (1993). The oral assessment with pictures is designed to assign adult learners to one of four proficiency levels. Proficiency levels are potentially less damaging than grade levels. However, it is unclear whether self-placement has been pursued as an option for ESOL classes.

New York's non-traditional adult education programs have also weighed in on the placement debate. At a recent conference entitled "Linking Standards to Non-Traditional Programs," a consensus was reached on the ideal initial placement test or assessment: it takes no more than 40 minutes, can be completed at home, and provides the entry reading scores required by SED (Bodner, 1998). In their discussions, the conference participants noted Venezky's work (1993). They also suggested self-selection or self-placement activities, among other options. The participants recommended further discussion as well as possibly pursuing the development of a grade-level-controlled series of reading passages.

Regardless of the kind of adult education program or setting, common sense applies in placing adult learners in the most appropriate instructional setting and level. Learners' goals and their sense of the skills they possess are most important in determining placement. If testing is absolutely necessary, the use of a locator or placement test seems to be indicated. For non-traditional programs, the changing environment must also be taken into account. In all cases, it is critical to make the learners as comfortable as possible before, during, and after the assessment.

Measuring Adult Learner Gain

Adult learner gain may be the most difficult assessment issue, given the current thinking among experts. Placement is important but, if misplacements occur, it is easy enough to correct them without grave consequences. But policymakers need to know how many adult learners stay long enough to reach their desired level of functioning, i.e., learner progress leading to program outcome. Currently, that means measuring gain and therein lies the problem.

Though problematic, measuring how adult learners are performing in a literacy program is very important. Holt maintains that a variety of instruments and procedures should be used to ensure that programs are "…identifying learners' needs, documenting the learners' progress toward meeting their own goals, and ascertaining the extent to which the project objectives are being met" (1994). Lytle and Wolfe support a mix of assessment practices when they question whether any single measure is capable of capturing the repertoire of skills and strategies an individual needs to accomplish a variety of literacy tasks. The authors conclude that multiple methods of assessment seem inevitable.

Whatever the multiplicity of measures used to accomplish the various purposes, there are essentially four different approaches available for literacy assessment. Each of these approaches involves collecting and analyzing data provided by the adult learners in order to make judgments about the literacy accomplishments of individuals or groups (Imel, 1990). These approaches include: standardized testing, materials-based assessment, competency-based assessment, and participatory assessment.

Standardized testing is used in adult literacy assessment across the country more than any other approach. It is easy and inexpensive to administer. Sticht defines a standardized test as one designed to be given under specified, standard conditions; administered otherwise, it is invalid (1990). Two kinds of standardized tests are used: criterion-referenced and norm-referenced. Criterion-referenced tests assess a learner's achievement against an absolute standard, according to Sticht. Lytle and Wolfe note that norm-referenced tests, which measure an individual's performance against a "normal performance," are used most often. The TABE and the Tests of Applied Literacy Skills (TALS) are both examples of norm-referenced tests.

The TABE is the test most commonly used across the country, and for multiple purposes. In part, the current discussion focuses on whether the TABE should be used to measure adult learner progress, as is the current requirement in New York for Adult Basic Education (ABE) programs. There are problems with this approach. Venezky et al. note that, while there may be some correlation across tests due to test-taking skills and general intelligence, there is no evidence of a similar correlation for change scores (1994). The pre/post test approach makes growth appear linear, while actual growth in literacy skills is a much more complex cognitive process interwoven with the learners' experiences. Another problem is the use of grade-equivalent scales. Telling an adult that he/she reads or writes at a 4th grade level may be more demeaning than informative.

On the other hand, TALS, while norm-referenced, does not rely on a grade equivalency scale but rather a scale linked to a national survey of young adults. Sabatini (1995) notes that an individual's scale score on the TALS represents the level of difficulty of printed materials and associated tasks that the adult is likely to be able to perform competently. Thus, adult learners are ranked but there are no instructional levels implied. Conversely, with the TABE, if someone has a 4th grade reading level, the instructional strategies needed are evident. Again, this is a reflection of the functional literacy versus basic skills debate.

Assessment practice, and the resolution of the debate, would be simpler if the TABE and TALS could be used interchangeably depending upon the adult learner's goals. It appears, based on research by Sabatini et al., that the TABE Mathematics and Applications Test is a stronger predictor of the TALS Document Test than the TABE Reading Comprehension scores. The commonality between the math and document tests may be that problem-solving skills are required to do well on each. Be that as it may, the conflicting results limit combined use of the TABE and TALS to very specific circumstances. The perplexing results of the authors' research underscore the need for a common-sense approach to assessment.

In coming to terms with measuring adult learner gain, it is important to remember a basic tenet of adult education: valuing adult learners' goals. Generally, these goals center either on acquiring basic skills or on applied literacy skills. Venezky (1996) suggests dividing learners by their expressed goals into general literacy (basic skills) or specialized skill (functional literacy) programs. For example, if reading comprehension is desired, the TABE Reading Comprehension Test should be used to measure gain. For vocabulary, use a vocabulary test. If interpreting and using everyday documents is a goal, then use the TALS Document Test. Use a test to measure what is being taught, since what is being taught is a reflection of the adult learner's goals.

Regardless of the standardized test used, remember, when using gain scores, pretest scores are based on a test generally administered within the first week of enrollment. Since most adult students have not encountered formal testing since leaving school, scores are artificially low. As learners spend more time in school, scores rise significantly based only on practice (Venezky, 1996).

If pre/post test results are to be used, then common sense approaches are in order. First, wait a while until the adult learner is acclimated to his/her surroundings and comfortable with the assessor. Second, coaching is permitted, even recommended. What are the penalties for working quickly? What happens to those who prefer to work more slowly? Are there penalties for guessing? Strategies suggested in the Adult Education Resource Guide and Learning Standards (1998) for people taking the tests of General Education Development (GED) can be helpful in better preparing learners.

Although the focus is often on standardized testing, there are other means of assessment that can uncover much helpful information and assist in instructional planning. Materials-based assessment, the second assessment category, refers to the practice of evaluating learners upon completion of a set of materials (Imel, 1990). Such material is often available through commercial publishers, is linked to the instructional resources, and is easy to administer. On the other hand, the concept of literacy under these circumstances is generally limited to assessing reading skills. In addition, most of this material is prepackaged, limiting creativity and the ability to tailor it to specific learner goals.

The third category is competency-based assessment. It bears similarities to criterion-referenced standardized testing in that an individual's performance is measured against a predetermined standard of acceptable performance (Imel, 1990). Within a functional literacy approach, competency-based assessment recognizes the range of experiences that individuals already have. Testing frequency and regular feedback are two strengths of this kind of assessment. CASAS is described as a competency-based curriculum management, assessment, and evaluation system. It is based on the content areas of the CASAS Competency List, including consumer economics, community resources, health, employment, government and law, mathematical computation, learning to learn, and independent living skills. While competency-based in part, its assessment instruments include standardized testing and performance-based assessment as well.

Finally, participatory assessment (alternately characterized as alternative, classroom-based, authentic, performance-based or congruent) views the process as incorporating a broad range of strategies that provide an active role for learners. Paris and Parecki see participatory assessment as a way of collecting evidence through stimulated reflection such as asking adults about literacy events and their meaning, the importance of strategies, and the methods of instruction used to acquire literacy. Examples include surveys, interviews, learner self-assessment, portfolios, journals, and observation measures.

Adult educators appear to be increasing their use of alternative assessment and view it as highly instructive to both them and the adult learners. Self-assessment, as one testing strategy, provides opportunities for adults to maximize their independence and sense of control over their environment. Performance-based assessment offers the opportunity for adult learners to measure the achievement of their goals rather than imposed or mandated goals. Under these circumstances, perhaps certificates of mastery would be appropriate for the imprimatur that the adult learners have achieved their goals.

While the value of participatory assessment may be in its flexibility and capability to actively engage adult learners, there may still be some need for standardized data. In particular, Balliro does not regard participatory assessment as a panacea since funders and higher institutions may require "hard data" to meet eligibility or certification requirements (1993).

Each of the forms of assessment has strengths and weaknesses in measuring adult learner gain. So how do literacy providers build an assessment practice or protocol that pairs the appropriate assessment tools with overall purposes and instructional approaches? Nurss (1989) proposes the following questions in selecting assessment instruments and procedures for use in adult literacy programs:

  1. What is the purpose of the assessment?
  2. Is the assessment instrument appropriate for use with adults?
  3. How reliable, valid, and practical is the instrument?
  4. Is the instrument culturally sensitive?
  5. Is there congruence between the instrument/approach and the instruction?

Each question may or may not be appropriate for every decision related to assessment. Nonetheless, measuring gain is a serious subject with serious consequences for adult learners, adult educators, policymakers, and funders. More examination is probably better than less. To underscore the seriousness of this venture, Sabatini cautions "…in attempting to measure gain, scrupulous care must be taken or any number of factors may result in spurious outcomes in individual or aggregate measures."

Other Adult Learner Performance Measures

There are other measures that can be used to assess adult learner gain. In planning these measures, adult learner goals are used as the foundation. In fact, in the first few interviews, the adult learner and adult educator can develop a scoring guide as an observation measure to evaluate or assess progress. For example, if someone wants to learn how to apply for a job, there are several steps that must occur and can be assessed: reading the want ads, making telephone calls, constructing a resume, practicing an interview, etc. As each of these steps occurs, it can be noted and assessed as to the quality of the completed task. A parent may wish to read books to his/her children. Again, with the adult learner, a series of steps can be planned: going to the library with the family to get library cards, taking books out of the library, reading a short, easy book to the children each day, reading harder, longer books, etc. Accomplishment of the steps, their timeliness, and their quality all can be noted in the scoring guide. Other performance-based assessments, such as journals, portfolios, and interviews, can be used to measure learner progress. These measures and other assessments, including standardized tests, paint a more complete picture of the adult learner's success in adult literacy.


As each of the questions related to assessment has been addressed, there are several important trends to be noted in summary. First is the importance of the adult learner's goals. It is impossible to appropriately assess the adult learner without his/her goals. Second, common sense must be applied at all levels of assessment but particularly in placement. As goals are elicited from the adult learner, self-placement can be included in the interview process, i.e., self-identification of instructional materials. Third, if using standardized tests, the testing must match the instruction. If applied literacy skills are needed, e.g., using public transportation, doing income tax forms, reading a menu, etc., then an applied literacy test must be used. If reading comprehension skills are desired, a reading comprehension test is required. Finally, planning self-assessment measures with the adult learner as his/her goals are revealed is critical to determining success.

The discussions of the past few decades have revealed the complex connections between cognitive processes and life experiences and the difficulties inherent in trying to measure aspects of them. On the other hand, research during this decade seems to indicate that there are assessment instruments that can be used to answer the questions posed by adult learners, adult educators, policymakers, and funders. With caution, common sense, and dialogue, it is possible to measure the extent to which adult learners are successful in adult literacy programs in New York State.

References

Adult Education Resource Guide and Learning Standards (1998). Albany, NY: New York State Education Department.

Adult Literacy: The Key to Lifelong Learning (1992). Albany, NY: New York State Education Department.

Balliro, L. (1993). What kind of alternative? Examining alternative assessment. TESOL Quarterly, 27(3), 558-560.

Bodner, C. (1998). Reflections on Intake Assessments for Non-traditional Programs.

Bond, L.A. (1996). Norm- and criterion-referenced testing. ERIC Digest, TM 96-09.

Burt, M. & Keenan, F. (1995). Adult ESL learner assessment: Purposes and tools. ERIC Digest, 73.

Farr, C.W., Moon, C.E., & Williams, A. (1986). Correlating the test of adult basic education and the test of general education development. Lifelong Learning, 9, 17-19.

Holt, D.D. (Ed.). (1994). Assessing Success in Family Literacy Projects: Alternative Approaches to Assessment and Evaluation. Washington, D.C. and McHenry, IL: Center for Applied Linguistics and Delta Systems.

Imel, S. (1990). Adult literacy learner assessment. ERIC Digest, 103.

Kutner, M., Webb, L., & Matheson, N. (1996). A review of statewide learner competency and assessment systems. Draft Report. Washington, D.C.: Pelavin Research Institute.

Lytle, S.L. & Wolfe, M. (1989). Adult literacy education: Program evaluation and learner assessment. Information Series no. 338. Columbus: ERIC Clearinghouse on Adult, Career, and Vocational Education, Center on Education and Training for Employment, The Ohio State University.

New York State Education Department Memorandum. (1997).

Nurss, J.R. (1989). Assessment Models and Instruments: Adult Populations. Atlanta, GA: Center for the Study of Adult Literacy, Georgia State University.

Paris, S. & Parecki, A. (1993). Metacognitive aspects of adult literacy. Technical Report (TR93-09), Philadelphia, Pennsylvania: National Center on Adult Literacy.

Rogers, A. (1992). Achievements and outcomes: Evaluation, adult education and development. Adults Learning, 4(3), 69-72.

Sabatini, J.P., Venezky, R.L., & Bristow, P.S. (1995). Comparing applied literacy and basic skills tests as measures of adult literacy performance. Technical Report (TR95-03), Philadelphia, Pennsylvania: National Center on Adult Literacy.

Sticht, T.D. (1990). Testing and Assessment in Adult Basic Education and English as a Second Language Programs. San Diego, CA: Applied Behavioral and Cognitive Sciences, Inc.

Taylor, D. & Dorsey-Gaines, C. (1988). Growing Up Literate: Learning from Inner City Families. Portsmouth, NH: Heinemann.

United States Department of Education. Annual Performance and Financial Reports.

Venezky, R.L. (1996). Literacy assessment in the service of literacy policy. Technical Report (TR95-02), Philadelphia, Pennsylvania: National Center on Adult Literacy.

Venezky, R.L., Bristow, P.S., & Sabatini, J.P. (1994). Measuring change in adult literacy programs: Enduring issues and a few answers. Educational Assessment, 2(2), 101-131.

Venezky, R.L., Bristow, P.S., & Sabatini, J.P. (1994). Measuring gain in adult literacy programs. Technical Report (TR93-12), Philadelphia, Pennsylvania: National Center on Adult Literacy.

Venezky, R.L., Bristow, P. S., & Sabatini, J.P. (1993). When less is more: A comparative analysis for placing students in adult literacy classes. Technical Report (TR93-08), Philadelphia, Pennsylvania: National Center on Adult Literacy.

Venezky, R.L., Wagner, D.A., & Ciliberti, B.S. (1990). Toward Defining Literacy. Newark, DE: International Reading Association.
