I skimmed a paper last night about augmenting a language model with a database of facts that it consults to refine its predictions. The idea is to improve the model's recall of hard facts. But now I'm wondering what would happen if you supplied a database of nonsense phrases or other silly things instead
maybe you could use that technique to quickly fine-tune a network on some chosen data instead of needing to train it 🤔
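a rough sketch of how that kind of retrieval augmentation tends to work (this is my guess at the mechanism, not the paper's actual code: the model's next-token distribution gets blended with a distribution built from the nearest stored contexts in the database, so swapping the database swaps what it "remembers"):

```python
# Hedged sketch of retrieval-augmented prediction (kNN-LM style).
# All names here are made up for illustration; the real paper's
# datastore format and interpolation rule may differ.
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=2):
    # keys: stored context vectors; values[i]: the token that
    # followed the context keys[i] in the datastore
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest])
    weights /= weights.sum()
    probs = np.zeros(vocab_size)
    for idx, w in zip(nearest, weights):
        probs[values[idx]] += w
    return probs

def augmented_predict(model_probs, query, keys, values, lam=0.5):
    # blend the model's own distribution with the retrieval one;
    # lam controls how much the database overrides the model
    knn_probs = knn_distribution(query, keys, values, len(model_probs))
    return (1 - lam) * model_probs + lam * knn_probs
```

the fun part is that nothing in `augmented_predict` cares whether `values` points at facts or at nonsense, which is why swapping the datastore looks like a cheap stand-in for fine-tuning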