for people wondering, there is a reason why the #blindsoft image describer (both the Python and JS versions) barely uses embeds for text: if you have a TTS bot that needs to read text, especially from other bots, it will not be able to pick up embed messages, because most TTS bots on voice channels do not read JSON-formatted embed text.

also, when I'm reading an image description, I don't want to constantly scroll through embeds to find what I'm looking for; I want it to just be ready. plus, I already get complaints that the image descriptions look like long paged essays, and I wouldn't want users to complain more.

note: I will be working on a future implementation of @alexchapman's galacticord to see if I can get it to read JSON-formatted embed text, though I'm currently not sure how possible or impossible that will be.
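to illustrate why embeds trip up TTS bots: in Discord's API, a message's plain text lives in the `content` field, while embed text is nested inside the `embeds` array (`title`, `description`, `fields`). a minimal sketch (the function names and the simple dict payloads are hypothetical, just shaped like Discord's message JSON, not code from any actual bot):

```python
# Hypothetical sketch: why a naive TTS reader misses embed-only messages.
# A Discord message's plain text is in "content"; embed text is nested
# inside "embeds", so reading only "content" yields nothing for embeds.

def readable_text(message: dict) -> str:
    """What a naive TTS reader sees: the plain content field only."""
    return message.get("content", "")

def readable_text_with_embeds(message: dict) -> str:
    """Also dig text out of each embed's title, description, and fields."""
    parts = [message.get("content", "")]
    for embed in message.get("embeds", []):
        for key in ("title", "description"):
            if embed.get(key):
                parts.append(embed[key])
        for field in embed.get("fields", []):
            parts.append(f"{field.get('name', '')}: {field.get('value', '')}")
    return "\n".join(p for p in parts if p)

# A description sent as plain text vs. the same text inside an embed:
plain = {"content": "Image description: a cat on a windowsill."}
embedded = {"content": "",
            "embeds": [{"title": "Image description",
                        "description": "A cat on a windowsill."}]}

print(readable_text(plain))     # the TTS reader gets the full description
print(readable_text(embedded))  # empty -- nothing for the TTS bot to read
```

this is basically what I'd have to bolt onto galacticord: walk `message.embeds` and flatten the text out, instead of relying on `content` alone.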