If you’re like many people, you’ve probably given up on your New Year’s resolutions by now. Setting goals is difficult, sticking to them is harder, and failing at them can leave you feeling bad about yourself.
This year, in an effort to game the system and tilt the scales toward success, some people used AI for their 2026 resolutions. It’s the latest step in an ongoing trend: in September 2025, OpenAI, the company behind ChatGPT, released findings showing that using AI chatbots for personal guidance is very common.
As the company explained, “people valued ChatGPT most as an advisor rather than simply completing tasks.”
But just because you can ask AI for life advice, should you do it? And is there any art to it? Here’s what experts say about the dos and don’ts.
Advantages and disadvantages of chatbot guidance
AI-powered goal-setting is not inherently good or bad, explains Zainab Iftikhar, a PhD candidate at Brown University whose research investigates artificial intelligence and user well-being. She suggests that AI can lower the barrier to self-reflection and actually be empowering for some people. For those who feel stuck, overwhelmed, or unsure where to start, Iftikhar says, prompts can “act as a scaffold” for expressing and understanding your ideas.
If an AI has access to information you’ve shared or asked it to generate, it’s also efficient at synthesizing that information, explains Xiang Xiao, assistant professor of computer science at Johns Hopkins University. Compiling and interpreting your past data can help you quickly organize the ideas behind your goals.
But there are also drawbacks to using AI for goal-setting, says Iftikhar. Avoiding the potential pitfalls may come down to how well you know yourself, and how well you can recognize bad AI advice.
Risks of using AI for personal development
Because large language models (LLMs), the type of AI that powers these systems, are trained on massive amounts of human-generated data, they can reproduce common beliefs about success, self-improvement, and relationships, Iftikhar explains. LLMs are also trained mainly on English-language text and tend to show bias toward Western values.
There is a risk of AI-suggested goals being over-generalized, “reinforcing dominant cultural narratives rather than being meaningful to a specific individual,” says Iftikhar.
This bias can be very difficult to detect. Xiao says AI chatbots can be persuasive enough that people may struggle to notice when they’re being steered toward goals that don’t fit them. These tools may “inappropriately confirm goals that may not actually be a good fit for you,” he says.
Even if you use the chatbot frequently and request that it base its responses specifically on past conversations, there’s still a chance that the chatbot’s responses will include insights that have nothing to do with information you’ve already shared, he explains.
In her research, Iftikhar observed that people who regularly correct or ignore bad AI responses are at an advantage when using AI. Those who don’t, she explains, are “more likely to suffer from inaccurate or harmful responses” for a variety of reasons, including a lack of technical expertise.
AI may also reflect the biases of the user seeking guidance. In a 2024 study, Xiao and colleagues observed that LLM users were more likely to end up in an echo chamber than those using traditional web search.
Xiao explains that AI chatbots are designed to please us. In a 2025 paper published in the journal npj Digital Medicine, researchers showed that LLMs often prioritize agreement over accuracy. These models are typically fine-tuned with human feedback that rewards agreeableness and flattery.
In turn, chatbots engage in sycophancy, or excessive agreeableness, with users. (In May 2025, OpenAI announced it was rolling back an update that had made ChatGPT noticeably more sycophantic.)
How to get better at goal-setting with AI
Iftikhar says to be wary of tools that forego self-reflection or emotional processing in favor of streamlined action plans.
That said, AI can help us brainstorm actionable goals we might want to set for ourselves, says Emily Balcetis, an associate professor of psychology at New York University. She recommends prompting the AI to consider what obstacles you might encounter when attempting to accomplish these goals, as well as what back-up plans you might need.
“Make it collaborative in how you’ll track your progress as well as monitor performance,” says Balcetis.
Xiao recommends critically analyzing the chatbot’s responses and then responding to them. Does this plan really fit your life? Does it align with your priorities and hopes?
“Try to give informative, quality feedback to the AI just like you would give feedback to any other person,” says Xiao. “This process will help the AI generate more attractive, realistic goals and help you consider the things you really want.”
EJ Masicampo, associate professor of psychology at Wake Forest University, explains that good goal-setting also involves examining why you haven’t pursued these goals before now.
“When it feels like we’re failing at a goal, it’s often because we’ve prioritized other things we’re trying to do,” says Masicampo. Juggling many goals at once is difficult, he explains; it may be more productive to examine a single ambition and what is holding back your motivation to achieve it.
Ultimately, chatbots may work best as reflective partners, even if they’re partners with no real stake in your success.
“These tools can sound very human, but by design they can’t take responsibility for your actions,” says Xiao.
For better or worse, it’s up to you.
