Claude Code has had this for a while (it seems like old news anyway). In my limited world it works really well: Claude Code has made almost no mistakes for weeks now. It seems to 'get' our structure. We have our own framework, which would be very badly received here because it's very opinionated; I'm quite against freedom of tooling, because most people can't really evaluate what is and isn't good for the problem at hand. So we have exactly the tools and APIs that consistently work best in the cases we encounter, and Claude seems to work very well within that.
It does seem like the main new thing is that, like ChatGPT, Claude will now occasionally decide on its own to "add" new memories based on the conversation. This did not (and I believe still does not) apply to Claude Code memories.
What do you think a memory system even is? Would you call writing things down on a piece of paper a memory system? Because it is one. Claude Code stores some of its memory in some form and digests it, and that is enough to call it a memory system. It could be intermediate strings of context that it keeps around; we may not know the internals.
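To make the point concrete: in that broad sense, a "memory system" can be as simple as notes persisted to a file between sessions and re-digested into context on the next run. A minimal sketch below, in Python; the file name and structure are my own invention, not anything we know about Claude Code's internals.

```python
# Minimal "memory system": persist notes to disk, then digest them
# back into a context string for the next session.
# MEMORY_FILE is a hypothetical storage location for illustration only.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")

def add_memory(note: str) -> None:
    """Append a note to persistent storage."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    notes.append(note)
    MEMORY_FILE.write_text(json.dumps(notes))

def recall() -> str:
    """Digest stored notes into a context block for the next session."""
    if not MEMORY_FILE.exists():
        return ""
    notes = json.loads(MEMORY_FILE.read_text())
    return "\n".join(f"- {n}" for n in notes)

add_memory("Project uses an opinionated in-house framework")
print(recall())
```

Whether the real system is a file, a database, or intermediate context strings, the shape is the same: write somewhere durable, read it back, fold it into the prompt.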
I doubt it. It's more about conversational ability, enhancing the illusion that Claude knows you. I doubt you'd want old code bleeding into new code in Claude Code.