A recent study published in the journal JAMA Network Open examined the potential of artificial intelligence (AI) in the interpretation of chest radiographs in the emergency department (ED). The study aimed to evaluate the accuracy and quality of AI-generated reports compared to reports from teleradiology and in-house radiologists.
Timely interpretation of clinical data is crucial in the ED, as it impacts patient care and outcomes. While there is generally low discrepancy between the interpretations of chest radiographs by ED physicians and radiologists, immediate access to radiologist interpretations can help avoid treatment errors and unnecessary callbacks for discharged patients. As the use of radiology for diagnoses in the ED continues to rise, there is a growing interest in systems that can provide swift interpretations of radiographs to streamline patient processing.
Free-standing EDs without dedicated radiology services or 24-hour coverage often rely on teleradiology services or preliminary resident interpretations. Both options, however, carry a risk of error, in part because of limited access to complete clinical records. AI, by contrast, has emerged as a potential means of interpreting clinical data accurately and rapidly.
The study involved developing an AI tool for the interpretation of chest radiographs and conducting a retrospective evaluation of its performance in the ED setting. The tool utilized an encoder-decoder model based on transformers, which analyzed chest radiograph images as input and generated radiology reports as output.
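The encoder-decoder pipeline described above can be sketched in outline. The toy below is a hypothetical illustration, not the study's actual model: the encoder and the decoder's next-token step are trivial stubs (the function names, the vocabulary, and the stub logic are all invented here) so that only the overall image-in, report-out control flow is shown.

```python
# Hypothetical sketch of an encoder-decoder report generator.
# The study used a transformer-based model; here the encoder and the
# decoder step are stubs, so only the autoregressive loop shape is real.

def encode_image(pixels):
    """Stub encoder: reduce an image (list of pixel rows) to a feature
    vector. A transformer encoder would emit one embedding per patch."""
    return [sum(row) / len(row) for row in pixels]

VOCAB = ["<bos>", "no", "acute", "cardiopulmonary", "findings", "<eos>"]

def next_token(features, generated):
    """Stub decoder step: a real transformer decoder would attend to the
    image features and the tokens generated so far. Here we simply walk
    through the toy vocabulary to drive the loop."""
    return VOCAB[min(len(generated), len(VOCAB) - 1)]

def generate_report(pixels, max_len=20):
    """Greedy autoregressive decoding: start from <bos>, append one
    token at a time until <eos> is produced."""
    features = encode_image(pixels)
    generated = ["<bos>"]
    while len(generated) < max_len:
        tok = next_token(features, generated)
        generated.append(tok)
        if tok == "<eos>":
            break
    return " ".join(generated[1:-1])

print(generate_report([[0, 1], [1, 0]]))
```

In a real system, `encode_image` would be a vision backbone and `next_token` a learned distribution over a radiology vocabulary; the surrounding decode loop, however, would look much the same.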
The test dataset consisted of 500 chest radiographs, excluding images of individuals below 18 or above 89 years of age. Reports from teleradiology and in-house radiologists were also available for comparison. The AI-generated reports, along with teleradiology and final radiologist reports, were deidentified. The reports were evaluated by six board-certified physicians who were blinded to the report type. The physicians rated the clinical accuracy and quality of the reports using a five-point Likert scale. Critical findings that could impact patient management were identified, and any discrepancies were noted.
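A blinded rating study of this shape reduces, analytically, to averaging Likert scores per report type. The snippet below is a minimal sketch of that aggregation; the ratings, field names, and `mean_scores` helper are invented for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical blinded ratings (five-point Likert scale, 1-5).
# Values and field names are illustrative only.
ratings = [
    {"report_type": "AI", "accuracy": 4, "quality": 5},
    {"report_type": "AI", "accuracy": 5, "quality": 4},
    {"report_type": "teleradiology", "accuracy": 4, "quality": 3},
    {"report_type": "in-house", "accuracy": 5, "quality": 4},
]

def mean_scores(rows, metric):
    """Average one Likert metric for each report type."""
    by_type = {}
    for r in rows:
        by_type.setdefault(r["report_type"], []).append(r[metric])
    return {t: mean(v) for t, v in by_type.items()}

print(mean_scores(ratings, "quality"))
```

Because raters were blinded to report type, this grouping happens only after rating, when the deidentified reports are mapped back to their source.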
The results indicated that the AI tool produced radiology reports whose textual quality and accuracy were comparable to those of final radiologist reports, and whose textual quality exceeded that of teleradiology reports. This suggests the AI tool could assist with diagnoses and decision-making in the ED, enhancing patient care.
The study also highlighted specific cases where the AI-generated report outperformed the radiologist report. In one instance, the AI report detected a new infiltrate that was missed by the radiologist. In another case, the AI-generated report correctly identified worsening opacities, while the radiologist report overlooked this significant finding. These examples demonstrate the AI tool’s ability to provide valuable insights for differential diagnoses and further evaluation recommendations.
Overall, the study concluded that the AI tool offers rapid and accurate interpretation of chest radiographs in the ED, comparable to radiologist reports and superior to teleradiology reports in textual quality. The short processing time and high accuracy of the AI tool have the potential to streamline patient processing in the ED, assisting physicians in making informed decisions promptly. As AI continues to evolve, it may become an invaluable resource for improving efficiency and quality of care in emergency departments.
Note:
1. Source: Coherent Market Insights, Public sources, Desk research
2. We have leveraged AI tools to mine information and compile it