If you’re like most Americans, you’ve already set all manner of goals and resolutions for the New Year. And likewise, if you’re like most Americans, you’ll have entirely abandoned them by February 1.
Studies have found that 23% of people quit their New Year’s resolutions within a week, and almost half drop them by the end of January. Only 9% of Americans actually complete anything from their list in a given year.
The biggest issue, apparently, is that we’re all very bad at setting resolutions. The things we choose are too vague, too hard, or too external.
That got me wondering: Could AI do any better?
Specifically: Can I mine the vast treasure trove of personal information ChatGPT has gleaned from our conversations and use that to set better resolutions for the year ahead?
Turns out, the answer is yes. Here’s how I funneled ChatGPT’s casual disregard for privacy into a list of specific, actionable resolutions for 2026—and how you can do it too.
Remember Me
Many users don’t realize that ChatGPT pays careful attention to every conversation you have with it. It’s constantly eyeing your language choices, facts you share about yourself, and data you upload in order to better understand what makes you tick.
And it retains everything.
This privacy-obliterating feature is called Memory. OpenAI rolled it out in 2024 and has expanded and improved it steadily ever since.
OpenAI CEO Sam Altman called Memory one of the most important “breakthrough” areas for AI, and the company is leaning heavily into improving the feature in 2026.
Memory is helpful because it allows ChatGPT to respond to your queries in a more personalized way. If the bot knows you’re a vegetarian, for example, it won’t recommend a meatball sandwich when you ask it for lunch ideas.
But ChatGPT’s Memory can also get extremely granular—and strange.
You can see what the bot knows about you by clicking your profile icon in the ChatGPT interface, choosing Personalization, finding the Memory section, and pressing Manage.
Doing this for myself, I learned, for example, that ChatGPT knows my birthday, my marital status, where I live, and the names of my children.
But bizarrely, it also believes that I’m “writing articles about asphalt” and has stored the fact that I like “straight ASCII quotes” in its vast Memory banks.
While OpenAI talks about Memory as a personalization function to help ChatGPT provide more helpful responses, it’s also likely a way to lock you into OpenAI’s system.
If ChatGPT knows more about you than Gemini, you’re more likely to keep using it. You won’t just flit over to a different chatbot provider every time they roll out a new model, as many users do today.
All that stored info, then, is really there for OpenAI, not for you. But with the right prompting, you can readily access and mine it. Specifically, you can use it to make a killer list of resolutions.
Resolving Wisely
To do so, I fired up the ChatGPT interface and selected the GPT-5.2 model, then set the bot to Extended Thinking mode. That configuration ensures ChatGPT uses its most powerful model and spends as much time as possible processing a given query.
I then gave the bot this prompt (feel free to steal it for your own resolution setting):
“Look back at your memory of the conversations we’ve had over the last year. Based on what you find, make a list of 10 highly specific, actionable New Year’s Resolutions for me for 2026. Cover all aspects of life, including work, health, family, and more. Follow expert guidance and best practices for setting realistic, actionable and truly achievable New Year’s resolutions. Specifically, use your knowledge of me to tailor the resolutions to the things I value and care about, and phrase/structure them in a way that you know will resonate with me personally.”
After thinking for several minutes, ChatGPT responded with a customized list. As requested, the resolutions are very specific. And the bot clearly knows lots about me.
Its first recommendation is to “Run a 45-minute 925 Newsroom Sprint 4 days/week” with the goal of “publishing 3 locally sourced Bay Area Telegraph stories/week (permits, public safety, openings, schools, city hall) and miss no more than 6 weeks total.”
Based on that, ChatGPT clearly knows that I run a local news publication and publish a newsletter about the Bay Area’s “925” region.
But it also seems to know about how much time I take off every year (six weeks), and correctly inferred the kinds of stories I cover for my publication.
For another resolution, ChatGPT advises me to “Hit 30 minutes of licensing progress 5 days/week” and gives specific ways I could do that—a reference to my day job as a news photographer with licensable photos.
I mostly talk with ChatGPT about work, so many of its resolutions focus on my professional life. But it also recommended several health-related resolutions, like “Make LDL-friendly eating automatic with 3 defaults,” including “one soluble-fiber item daily (beans, oats, chia, etc.).”
Sometime in 2025 I must have uploaded blood test results and asked the bot to explain them to me. Since then, ChatGPT has apparently been worrying about my LDL cholesterol and would like me to tweak it (thankfully, my actual doctor is not worried).
Other suggested resolutions focus on building a workout routine (including a less-strenuous “dad-of-3 version” for busy weeks), improving my Python coding, and traveling more to photograph hotels for work.
Forget It
Overall, I’m impressed by ChatGPT’s specificity and level of detail. My own real-life list of resolutions is laudable but vague, with items like “be more present in daily life.”
ChatGPT’s, in contrast, are all about metrics, action items, and accountability. Based on expert advice, that’s probably a wise approach.
Still, it creeps me out a bit to see how much ChatGPT knows about me. And it feels stranger because I never specifically asked the bot to remember any of those things—it just decided to retain all the minutiae I dumped into its interface.
That’s fine when ChatGPT remembers things like my preferred format for em dashes, and the fact that I enjoy Jared Bauman’s writing (he’s a friend).
But when the bot starts retaining highly specific medical information based on a conversation I forgot I even had, the whole thing starts to feel invasive.
Thankfully, OpenAI makes it fairly easy to remove specific items from ChatGPT’s memory. You can do so on the same Manage page I referenced earlier. After seeing what the bot knows about me, I deleted several items that were too overtly medical or were simply wrong.
You can also opt to switch off the function entirely, or to use a Temporary Chat for a specific, sensitive query.
Those are short-term fixes, though. As Altman’s “breakthrough” comment suggests, Memory is becoming an increasingly important function of modern AI chatbots.
That means LLMs will almost certainly retain ever more knowledge about us—especially as companies exhaust the performance gains of building ever-bigger models and data centers. And they may not always explicitly share what they know.
For now, you can leverage that knowledge for good and set some resolutions for the year ahead.
But as you do so, might I suggest adding another resolution to your list: “Share less with LLMs. And remember that what you do share they may never truly forget.”
source https://www.fastcompany.com/91470687/chatgpt-new-years-resolutions