Interesting. Like what? What has OpenAI said about data it obtains on other ChatGPT services?
@dangoodin @Em0nM4stodon Their full policy is "We absolutely train our models on your data, but you can opt out." https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance
But nowhere does that policy specify "foundation" models. I'm on the outside looking in, but I'm confident that what the customer sees is not the raw foundation model; it's an iterated one that has been more rapidly fine-tuned. And that fine-tuning could use models which were themselves trained on health data. The specificity of the word, especially when it's absent elsewhere, is notable.
@dangoodin @Em0nM4stodon Here, foundation models are mentioned, along with the fact that user data is used to train them. https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed
But again, the full application architecture is not clear.
So your suspicion is that while OpenAI lets you opt out of training of their "models," it's possible that OpenAI still uses that data to train "foundation models," which may be connected to the models somehow?
@dangoodin @Em0nM4stodon Not exactly. I have no reason to doubt that opt-out is opt-out. My suspicion is around the language regarding Health data.
Can you be more explicit about what that suspicion is, exactly?
@dangoodin @mttaggart @Em0nM4stodon Look at LoRAs - low-rank adaptations that sit on top of the foundation model.
LoRAs still need training, and they're used to tune output to more specific cases, further specializing the foundation model.
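For anyone unfamiliar, here's a rough sketch of the idea, assuming PyTorch (the class name and hyperparameters are illustrative, not anything OpenAI has disclosed): the pretrained weight stays frozen, and only a small low-rank correction is trained on the specialization data.

```python
# Minimal LoRA sketch: the foundation layer's weights are frozen;
# only the low-rank factors A and B are trained on the new data.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the foundation weights
        # Low-rank update: delta_W = B @ A, where A is (rank x in)
        # and B is (out x rank), so far fewer trainable parameters
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Foundation output plus the trained low-rank correction
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

The point for this thread: the adapter is a separate, trained artifact that lives above the base model, so "we train on your data" can be true even if the foundation model itself is never touched.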
@neurovagrant @dangoodin @mttaggart @Em0nM4stodon personally, I'd say no #US company can make any privacy-by-design claims, because the #CloudAct exists and applies to everyone (regardless of whether it's #ClosedAI or #Signal) that has personnel, offices, or infrastructure in, or offers services from within, the #USA.
#NotLegalAdvice but the Cloud Act is irreconcilable with any #privacy & #dataProtection laws, not just #GDPR & #BDSG, but even #HIPAA & #PCIDSS!