Autonomous High-Quality Image Editing Triplet Mining
AI

Autonomous High-Quality Image Editing Triplet Mining

[Submitted on 18 Jul 2025] NoHumansRequired: Autonomous High-Quality Image Editing Triplet Mining, by Maksim Kuprashevich, Grigorii Alekseenko, Irina Tolstykh, Georgii Fedorov, Bulat Suleimanov, Vladimir Dokholyan, and Aleksandr Gordeev. Abstract: Recent advances in generative modeling enable image editing assistants that follow natural language […]

Automated Face Blurring and Human Movement Kinematics Extraction from Videos Recorded in Clinical Settings
AI

Automated Face Blurring and Human Movement Kinematics Extraction from Videos Recorded in Clinical Settings

[Submitted on 21 Feb 2024 (v1), last revised 18 Jul 2025 (this version, v2)] SecurePose: Automated Face Blurring and Human Movement Kinematics Extraction from Videos Recorded in Clinical Settings, by Rishabh Bajpai and Bhooma Aravamuthan. Abstract: Movement disorder diagnosis often relies on expert evaluation of patient videos,

Sparse Rewards Can Self-Train Dialogue Agents
AI

[2409.04617] Sparse Rewards Can Self-Train Dialogue Agents

[Submitted on 6 Sep 2024 (v1), last revised 18 Jul 2025 (this version, v3)] Sparse Rewards Can Self-Train Dialogue Agents, by Barrett Martin Lattimer and 3 other authors. Abstract: Recent advancements in state-of-the-art (SOTA) Large Language Model (LLM) agents, especially in multi-turn dialogue tasks, have been primarily

AI Safety course intro blog
AI

AI Safety course intro blog

Published on July 21, 2025 2:35 AM GMT. This is a linkpost for the intro course for CS 2881: AI Safety. It is mostly intended for Harvard/MIT students considering taking the course but could be of interest to others.

What Eliezer got wrong about evolution — LessWrong
AI

What Eliezer got wrong about evolution — LessWrong

This post is for deconfusing: Ⅰ. what is meant by AI and evolution. Ⅱ. how evolution actually works. Ⅲ. the stability of AI goals. Ⅳ. the controllability of AI. Along the way, I address some common conceptions of each in the alignment community, as described well but mistakenly by Eliezer Yudkowsky. Ⅰ. Definitions and distinctions: By far the

Your AI Safety org could get EU funding up to €9.08M. Here’s how (+ free personalized support) — LessWrong
AI

Your AI Safety org could get EU funding up to €9.08M. Here’s how (+ free personalized support) — LessWrong

Thanks to @Manuel Allgaier of AI Safety Berlin for his suggestion to write this post and his helpful feedback. And thanks to LW/AI Alignment Moderator Oliver for looking over the post.  LessWrong and AI safety have a unique opportunity: The EU is funding important projects to research AI safety, ranging from AI risk modelling, AI accountability,

Make More Grayspaces — LessWrong
AI

Make More Grayspaces — LessWrong

Author’s note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it’s only paid subscriptions that make it possible for me to write. If you find a coffee’s worth of value in this or any of my other work,
