I was about to comment the same. I don't know if I believe this system prompt. It's something ChatGPT specifically seems to be explicitly instructed to do, since most of my query responses end with "If you want, I can generate a diagram about this" or "Would you like to walk through a code example?"
Unless they have a whole separate model run that does only this at the end every time, so they don't want the main response to do it?
Seems they're struggling to correct it after first telling it that it's a helpful assistant with various explicit personality traits that would incline it toward such questions. It's like telling it it's a monkey and then going on to say "under no circumstances should you say Ook ook ook!"