It was definitely making use of the content, and not just my prompt.
...
Ok, being simplistic about the actual workings: anything an LLM outputs is based only on the training data or the prompt; an LLM does not "create" anything.
I really doubt your blog is represented significantly enough in the training data, so I can only assume that yes, the blog post URL you referenced was web scraped by ChatGPT, along with any other URLs linked from that main URL that the scraper deemed relevant to the prompt, and all that text was in fact added to the full internal prompt that was processed by the actual LLM.
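To make that concrete, here is a minimal sketch of the flow I'm describing: fetch the referenced page, pull out any linked URLs the scraper might also follow, and prepend all of that text to the prompt the model actually sees. Every function and prompt string here is illustrative — this is my guess at the general shape, not ChatGPT's real internals.

```python
import re

def extract_links(html: str) -> list[str]:
    # Naive href extraction; a real scraper would use a proper HTML parser.
    return re.findall(r'href="(https?://[^"]+)"', html)

def build_internal_prompt(question: str, pages: dict[str, str]) -> str:
    # pages maps URL -> fetched text; the LLM never "browses" by itself,
    # it just receives this concatenated context ahead of the question.
    context = "\n\n".join(f"[{url}]\n{text}" for url, text in pages.items())
    return f"Context scraped from the web:\n{context}\n\nUser question: {question}"

# Hypothetical fetched page (example.com URLs, made-up content):
main_page = '<a href="https://example.com/related-post">related</a> main body text'
links = extract_links(main_page)   # follow-up URLs the scraper could also fetch
pages = {"https://example.com/post": main_page}
prompt = build_internal_prompt("What does the blog claim?", pages)
```

The point is that by the time the LLM runs, your blog's text is just more prompt.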
There are many tiers of private information.
You can definitely collect a lot of useful telemetry data without collecting any of the, let's say, "most sensitive" private information.
Just to exemplify:
you can collect telemetry on the most accessed features of a piece of software and associate it with the user's location: when collecting that location you can definitely choose between the person's specific location (GPS coordinates with a few meters of accuracy) or their broad location (i.e. their city, state, or country).
Collecting someone's specific location is definitely way more sensitive than their broad location...
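A minimal sketch of that distinction: the same GPS fix can be reported at meter precision or coarsened to a city-scale bucket before it ever leaves the device. The coordinates and rounding choice below are just illustrative.

```python
def coarsen(lat: float, lon: float, decimals: int = 0) -> tuple[float, float]:
    # 0 decimal places is roughly a ~111 km bucket at the equator:
    # region scale, not a street address.
    return (round(lat, decimals), round(lon, decimals))

precise = (48.858370, 2.294481)   # a meter-accurate fix (Eiffel Tower)
broad = coarsen(*precise)         # (49.0, 2.0) -- "somewhere around Paris"
```

Shipping only `broad` still lets you segment feature usage by region without ever holding a coordinate that identifies a home or workplace.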
And the full content of all textual documents a person generates has a very high chance of containing their most sensitive private information...