I see that Prisma has taken the app world by storm. S, as usual, is to be found in a cave with a noise-cancelling headset on; storms just pass him by. I hadn't heard of Prisma. Sounds interesting, but a pointillist or fauvist rendition of Kim K would still remain a thing to avoid. I know nothing about the NN model used in Prisma; these Russians are too talented for their own good. In the end, it's all some matrix operation anyway! An image-processing gal might be your best bet.
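On "it's all some matrix operation anyway": Prisma's exact model isn't public, but published neural style-transfer methods (Gatys et al.'s, for instance) really do reduce "style" to matrix algebra, by comparing the Gram matrices of convolutional feature maps. A minimal sketch in NumPy, with a random array standing in for a real feature map (the shapes here are illustrative assumptions, not Prisma's):

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation ("style") matrix of a conv feature map.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) matrix whose entries measure how
    strongly pairs of filter responses co-occur across the image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)      # flatten spatial dims
    return f @ f.T / (h * w)            # normalised correlation matrix

# Stand-in for a layer's activations on some image.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))
g = gram_matrix(fmap)
print(g.shape)  # (8, 8)
```

Matching this matrix between a style image and a generated image is what makes the output "painterly"; content is matched separately on the raw feature maps. So yes, matrix operations all the way down.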
I had to turn the thread inside-out and upside-down to check whether this had already been posted as trivia. There, there, China has Micius now.
Don't hunger for such renown. Here is my second entry. It reminds me of the debacle around Microsoft's interactive AI bot TAY ("Thinking About You"). Back in March, TAY was prematurely released by Microsoft as a chatterbot with a Twitter handle. Within hours its deep-learning curve plunged disastrously: TAY began to mimic the sordid behaviour of users who were hurling abuse at it. The chatty bot was still learning and failed to filter out "unorthodox" inputs. It didn't know what not to emulate, and, voracious learner that it was, it regurgitated everything thrown at it. This brings up the topic of how to instil comme il faut (as the lofty Frenchies put it) behaviour in machines. Can a machine ever differentiate 'good' learning from 'bad' learning? A very reductive, flattened cross-section of our brain is deemed an "organic computer": if a man can do it, so can a machine with plumbed switches and control systems. Can we ever simulate the emergent behaviour of our brain from these atomised components? Who knows. However, while it lasted, TAY was coarsely promising as a glamorous yet riled Cylon. Meet TAY at @TayAndYou
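TAY's failure mode is easy to caricature in code. A toy sketch (the class name and blocklist are my own inventions, purely illustrative): a bot that treats every incoming message as training data, with only a crude keyword filter standing between the users and its corpus.

```python
import random

class NaiveChatBot:
    """Toy learner in the spirit of the TAY failure: it ingests every
    message it sees and later parrots material back, with only a crude
    blocklist between input and output."""

    def __init__(self, blocklist=None):
        self.corpus = []
        self.blocklist = set(blocklist or [])

    def learn(self, message):
        # TAY's flaw in miniature: whatever users say becomes training data.
        words = set(message.lower().split())
        if words & self.blocklist:
            return False  # crude filter; trivially bypassed by misspellings
        self.corpus.append(message)
        return True

    def reply(self):
        # Regurgitate something it has "learned", good or bad.
        return random.choice(self.corpus) if self.corpus else "..."

bot = NaiveChatBot(blocklist={"abuse"})
bot.learn("hello there")        # accepted
bot.learn("this is abuse")      # rejected by the filter
print(len(bot.corpus))          # 1
```

The weakness is obvious: any misspelling or paraphrase sails past the blocklist, and once a message is in the corpus the bot will happily repeat it. That, roughly, is what happened to TAY at Twitter scale.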
Continuing our discussion on simulating human behaviour in machines: who is five-foot-something, 16 years old, with turquoise pigtails, and will give Barbie a run for her money? That's Hatsune Miku. I read about her a few years back, and recently she was trending again because of an ad featuring Miku and Scarlett Johansson. Her fans, used to watching her bob her twintails, went berserk on seeing her for the first time with flowing hair. Could our Shreya Ghoshal and Adele be usurped by this avatar of a voice-synthesiser (VS)? All you need is a melody and lyrics to feed into the VS, and you have a silky rendition from these holographic vocaloids, ready for a sell-out concert. The technology seems promising. The entertainment looks affordable. The singer can be replicated (Miku played a New Year bash in eight different cities). Is this the future?
I don't have anything to add to this high-profile technology post, but I have a question if anyone has an answer. I happened to read "Nano Technology of Mind over Matter" by Rav Berg. In one of the chapters, he described the future of medicine as one with no surgeries other than organ transplants: all ailments would be addressed by a travelling "nanobot" in the bloodstream. So far I have heard about diagnostic nanobot research, but has anyone heard of "drug delivery" through nanobots in the bloodstream, with plumbing work carried out in the blood vessels, tumor cells killed, and so on? I am also very familiar with stem-cell research. While the ethics of cloning are being debated, is there research into creating an artificial organ that could replace a human organ, instead of finding matching organs from a donor? Viswa
I saw a few samples in the Museum of Science & Industry. The 3D samples matched the original organs perfectly in size and content, but they are still at the lab stage and not yet ready for production and deployment. I wonder how the FDA would react to artificial organs and, most importantly, how those artificial organs would interact with the other, original organs? Viswa
Refer to the attached wiki link. I'm no science expert, but I think the info is available in sections, if I remember right; even if not in depth, you can still get the basic idea (ethical arguments are discussed too). About being in the lab phase, quoting from the wiki...