I would be somewhat skeptical of any claims suggesting that results have been verified in some form by coordinators. At the closing party, AI company representatives were, disappointingly, walking around with laptops and asking coordinators to evaluate these scripts on the spot (presumably so that results could be published quickly). This bears little resemblance to the actual coordination process, in which marks are determined through consultation with (a) the confidential marking schemes*, (b) the leaders, and, importantly, (c) other coordinators and problem captains, in order to keep marking consistent.
Echoing the penultimate paragraph of that post: there were no formal agreements, regulations, or parameters governing AI participation. With no details about the actual nature of any potential "official IMO certification", there were several concerns about scientific validity and transparency (e.g. human contestants who score zero on a problem still have their marks published).