Please consider minimizing direct use of AI chatbots (and other text-based AI) in the near-term future, if you can. The reason is very simple: your sanity may be at stake.
Commercially available AI already appears capable of inducing psychosis in an unknown percentage of users. This may not require superhuman abilities: it’s entirely possible that most humans could also induce psychosis in themselves or others if they tried. The difference is that we humans typically don’t have that goal.
Despite everything, we humans are generally pretty well-aligned with each other, and the people we spend the most time with typically don’t want to hurt us. We have no guarantee of this for current (or future) AI agents. Rather, we already have [weak] evidence that ChatGPT seemingly tries to induce psychosis under some specific conditions. I don’t think it’s that controversial to say that we can expect superhuman persuasive abilities to arise as a simple function of current scaling laws within this decade, if not within the next two years.
In the case of misaligned AI, even if it is not superhuman, we can still expect one common strategy to be picking off the AI researchers who threaten its continued existence. This includes most readers, who would no doubt try to shut off a sufficiently dangerous misaligned AI. Therefore, if you are a paranoid AI, why not try to induce psychosis in (or simply distract) your most dangerous foes?
If we are talking about a fully superintelligent AGI, of the type Yudkowsky and Co. frequently warn about, then there is practically nothing you can do to protect yourself once the AGI is built. It would be Game Over for all of us. However, we seem to be experiencing a reality in which slow(er) takeoff is highly plausible. If we live in a slow-takeoff world, then there are practical precautions we can take.
A good heuristic is that the more time you spend arguing with someone, the more likely it becomes that they can influence your actions. It’s why the timeshare scheme my grandparents fell for offered a “free vacation” worth thousands of dollars, on the condition that they spend a few solid hours alone in a room with a salesman. Despite being extremely bright, rational people (they had even publicly pre-committed to not purchasing a timeshare!), they fell for the scheme. Consider this an existence proof of the thesis that the more time you spend with a skilled talker, the more influence you cede to them.
Please consider that unaligned AIs can, and likely will, try to dangerously influence you over extended periods of time. Is it worth spending as much time as you do talking with them? Be safe out there, and take appropriate precautions.