I wonder if there’s clear evidence that LessWrong text has been included in LLM training data.
Claude seems generally aware of LessWrong, but it’s difficult to distinguish between “this model has been trained on text that mentions LessWrong” and “this model has been trained on text from LessWrong”.
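One weak but checkable data point (a minimal sketch, not a definitive test): LessWrong pages show up in Common Crawl, which most open web-scale training corpora are derived from. The snippet below queries the public Common Crawl CDX index for lesswrong.com captures; the snapshot name is my assumption (any index listed at index.commoncrawl.org works), and a capture existing in Common Crawl doesn’t prove any particular model trained on it, since labs apply their own filtering.

```python
# Sketch: look for lesswrong.com captures in a Common Crawl snapshot via the
# public CDX index API. Presence here is suggestive, not proof, of inclusion
# in any specific model's training data.
import json
import requests

# Assumed snapshot name; see https://index.commoncrawl.org/ for current indexes.
CDX_ENDPOINT = "https://index.commoncrawl.org/CC-MAIN-2024-10-index"

def lesswrong_captures(limit: int = 5) -> list[dict]:
    """Return up to `limit` capture records for lesswrong.com post pages."""
    resp = requests.get(
        CDX_ENDPOINT,
        params={"url": "lesswrong.com/posts/*", "output": "json", "limit": str(limit)},
        timeout=30,
    )
    resp.raise_for_status()
    # The CDX API returns one JSON object per line.
    return [json.loads(line) for line in resp.text.splitlines() if line.strip()]

if __name__ == "__main__":
    for record in lesswrong_captures():
        print(record.get("timestamp"), record.get("status"), record.get("url"))
```

A more direct probe of the “trained on text from LessWrong” case would be memorization testing: prompt the model with the opening of an older, low-traffic LessWrong post and see whether it can continue it near-verbatim or name the title and author, which mere mentions of the site wouldn’t enable.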
Related discussion here, about preventing inclusion: