I read an article by AP medical writer Mike Stobbe published April 28, which was titled “More kids have autism, better diagnosis may be the reason.”

Stobbe cites an increase in children “identified” as having autism from one in 68 in 2010 and 2012 to one in 59 in 2014. “Identified” suggests a correct decision every time. No error. I would have used the term “diagnosed.”

Suppose you supervise university admissions and you decide whom to admit and whom to reject. If you had an admissions test perfectly related to future student performance, you could simply admit the top test scorers. No such measure exists.

Today, perfect prediction of university performance is impossible. So some students you admit should be rejected and some you reject should be admitted. The former we call false positives. The latter we call false negatives. The rest of the decisions are correct.

Without perfect prediction, there is potential for false positives and false negatives. This is part of the general truth that statistical inference involves the possibility of error. We don’t expect to make perfect decisions on imperfect information.

Stobbe writes that “There are no blood or biological tests for autism. It’s identified by making judgments about a child’s behavior. Traditionally, autism was diagnosed only in kids with severe language and social impairments and unusual repetitious behaviors. But the definition gradually expanded and autism now is shorthand for a group of milder, related conditions.”

This might have something to do with why “For years, the estimate was increasing in leaps and bounds, though it wasn’t clear why. A report released in 2007 put the estimate at 1 in 150 … .” It seems likely that sliding the criterion to admit more positives will get you bigger estimates.

In the extreme, if we were to diagnose every child as autistic, we would never miss a child with autism. There would never be a false negative. But that absurd tactic leads to an unacceptable rate of false positives.
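The trade-off can be made concrete with a toy simulation (my illustration, with made-up numbers, not anything from Stobbe’s article or real clinical data): give each child a hypothetical “behavior score,” let affected and unaffected children overlap on that score, and watch what sliding the diagnostic cutoff does to the two error rates.

```python
import random

random.seed(0)

# Hypothetical scores, not real data: affected children tend to score
# higher, but the two distributions overlap, so no cutoff is perfect.
affected = [random.gauss(2.0, 1.0) for _ in range(1000)]
unaffected = [random.gauss(0.0, 1.0) for _ in range(1000)]

def rates(cutoff):
    """False-negative and false-positive rates at a given cutoff."""
    fn = sum(s < cutoff for s in affected) / len(affected)
    fp = sum(s >= cutoff for s in unaffected) / len(unaffected)
    return fn, fp

for cutoff in (2.0, 1.0, -5.0):
    fn, fp = rates(cutoff)
    print(f"cutoff {cutoff:5.1f}: false negatives {fn:.2f}, false positives {fp:.2f}")
```

At a cutoff of -5.0, essentially every child is diagnosed: the false-negative rate drops to zero, exactly as in the extreme case above, while nearly every unaffected child becomes a false positive.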

Stobbe also cites Heather Cody Hazlett, a University of North Carolina psychologist who studies new ways to spot autism earlier, as saying that what’s discouraging is that fewer than half of autistic children are diagnosed by the time they turn 4. I would rephrase that claim to read that fewer than half of the children who are eventually diagnosed as being autistic are diagnosed by the time they turn 4.

This leaves the question open as to whether calling children autistic ever involved a wrong decision. Were some of them false positives? Given that some of the research involves studying records of diagnoses rather than observation of the children, the whole question of comparability of raters needs to be addressed as well.

The rest of the article suggests a well-intentioned urgency to diagnose autism as early as possible. Doctors’ efforts to be cautious and not alarmist can delay therapy or other services. “The CDC’s Deborah Christensen … said ‘We need to do work to make sure that children with developmental concerns are evaluated quickly.’ ”

The literature suggests some people recover from autism. Or is some of that simply false positives finally getting a correct diagnosis? Does anyone admit a diagnostic error? Or does the autistic child simply recover? Summaries of the disagreements in classification of people as autistic or not make for interesting reading.

The Childhood Autism Rating Scale (CARS) has been adopted widely for gauging the presence and severity of autism symptoms in both children and adolescents. Analysis of data from reports of original research that used CARS between 1980 and 2012 showed inter-rater reliability of about .80. Some simplifying assumptions and a little arithmetic lead to the estimate that false positives and false negatives both occur about 20 percent of the time.
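One way to make those simplifying assumptions concrete (my reconstruction of the arithmetic, not a calculation from the article): treat an inter-rater agreement of .80 as the probability that any single rating matches the child’s true status, and assume errors are equally likely in either direction.

```python
# Hedged reconstruction; the .80 figure is from the CARS literature
# cited above, everything else is an assumption for illustration.
agreement = 0.80

# Simplest reading: a rating is "correct" at the agreement rate, so
# the per-rating error rate is one minus the agreement.
error_rate = 1 - agreement

# Symmetry assumption: errors in either direction are equally likely,
# so false positives and false negatives each occur at about this rate.
print(f"estimated false-positive rate: {error_rate:.0%}")
print(f"estimated false-negative rate: {error_rate:.0%}")

# A stricter model: if each rater independently errs at rate e against
# the true status, two raters agree when both are right or both wrong:
#     agreement = (1 - e)**2 + e**2
# Solving for e at agreement = 0.80 gives a smaller per-rater error
# rate (about 11 percent), so 20 percent is a rough upper-end reading.
e = (1 - (2 * agreement - 1) ** 0.5) / 2
print(f"per-rater error under independence: {e:.0%}")
```

Either way, the arithmetic supports the column’s point: an agreement figure of .80 leaves room for misdiagnosis on the order of 10 to 20 percent of cases.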

So, if this is even close, one-fifth of the people are misdiagnosed. I wonder about the fate of people incorrectly labeled as autistic.

Richard S. Bogartz is a professor of psychology at the University of Massachusetts Amherst.