> According to this forum, they have been collapsing every day of every year since forever.

Yeah. We aren't pervs. We are disaster tourists.
> Yeah. We aren't pervs. We are disaster tourists.

Who doesn't like a good 'dystopian future' tagged game?
> I mean blaming capitalism instead of AI and NTR is a new take on the downfall of porn games.

I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business, and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of trying to defeat it.
> I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business, and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of trying to defeat it.

I agree. I'm on the business side and it has been abused by everyone everywhere. And it is still in its infancy. It will be amazing at the rate it's going, and I think there are only a couple factors that could stop it.
> I agree. I'm on the business side and it has been abused by everyone everywhere. And it is still in its infancy. It will be amazing at the rate it's going, and I think there are only a couple factors that could stop it:
> 1: cost to use it skyrockets beyond what people can afford; it's already stupid expensive for enterprise licensing, or rate limited, for individual people
> 2: it does something bad against powerful people who are very serious, and it is unplugged/limited/banned

I'm on the business side as well. In fact, my company just held a seminar on the implementation of RAG (retrieval augmented generation) with LLMs, a method of alleviating hallucination in LLMs by allowing external calls to data and APIs. My company (which creates and maintains an LMS) has implemented Kendra LLM in a big way, and it is NOT cheap.
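For the curious, the RAG pattern described above can be sketched in a few lines of Python. This is a minimal, framework-free illustration only: `call_llm` is a hypothetical stand-in for whatever model endpoint is in use, and the keyword-overlap retriever is a toy, not how Kendra or a production vector store actually ranks documents.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# Assumptions: `call_llm` is a hypothetical stand-in for a real model
# endpoint; the keyword-overlap retriever is a toy for illustration only.

def call_llm(prompt: str) -> str:
    """Hypothetical model call; plug in a real LLM client here."""
    raise NotImplementedError("connect to your model endpoint")

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved text to curb hallucination."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

The point of the pattern is that the model answers from retrieved documents rather than from its weights alone, which is what "external calls to data and APIs" buys you.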
> I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society…

I very much hope I'm wrong, but I see this going the way of the calculator. A bunch of people who can't do the most basic of tasks on their own. And when you can't do the basics, you can't extrapolate into the advanced.
> I'm on the business side as well. In fact, my company just held a seminar on the implementation of RAG (retrieval augmented generation) with LLMs, a method of alleviating hallucination in LLMs by allowing external calls to data and APIs. My company (which creates and maintains an LMS) has implemented Kendra LLM in a big way, and it is NOT cheap.

Sounds like we do some similar stuff.
To address your number 1, it seems there is going to be a bell curve with the cost, like there is with other top technologies: TVs were cheap, then stupid expensive as they evolved, and now they're stupid cheap again. This pattern seems to be happening again with LLMs due to several factors, most importantly competition.
As for number 2, that's not going to happen. Simple as that.
> I very much hope I'm wrong, but I see this going the way of the calculator. A bunch of people who can't do the most basic of tasks on their own. And when you can't do the basics, you can't extrapolate into the advanced.

You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used invisibly in every site and every business you use. People don't "buy" AI in general. Are there products that people can use? Sure, ComfyUI, Stable Diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.
Note that these are my beliefs. They're not going to change. I'm not trying to start an argument, just get my thoughts out there.
> …with AI I'm not sure if the cost will go down until the hardware to run it also does on a mass scale.

The hardware will get better and the hardware requirements will also lessen at a fixed rate until they meet somewhere close to the middle (possibly above or below it).
> …companies aim to integrate it into everything everywhere, and the hallucination problem rears its head in interesting situations when the people using it aren't doing their due diligence as humans to verify stuff before they copy and paste it as their own work.

Hallucination occurs only in LLMs because they don't natively have access to fact-checking data and they are created to predict answers. The retrieval augmented generation I mentioned, along with parameters, is being used to help alleviate this, though it has a ways to go before it can be considered handled. You can even ask ChatGPT about its parameters in this regard, and it will tell you that one of its prompt-based parameters (i.e., inserted into every prompt) is "If you don't know, don't say you know."
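A rough sketch of that kind of prompt-level guard is below. The actual wording of ChatGPT's internal instructions isn't public, so the guard phrase here is illustrative only, and the example question is made up.

```python
# Sketch: insert an anti-hallucination instruction ahead of every prompt.
# The guard wording is illustrative; real system prompts are not public.

GUARD = "If you don't know, don't say you know. Answer 'I don't know' instead."

def guarded_prompt(user_prompt: str) -> str:
    """Prepend the guard so it applies to every request."""
    return f"{GUARD}\n\n{user_prompt}"

# Hypothetical usage with a made-up question:
print(guarded_prompt("What were Initech's Q3 revenues?"))
```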
> You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used invisibly in every site and every business you use. People don't "buy" AI in general. Are there products that people can use? Sure, ComfyUI, Stable Diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.

I'm a bit confused here. I thought I did the opposite of wishful thinking.
> I very much hope I'm wrong, but I see this going the way of the calculator.

^
> You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used invisibly in every site and every business you use. People don't "buy" AI in general. Are there products that people can use? Sure, ComfyUI, Stable Diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.

I guess it will depend on your model and training parameters, and whether you have some in-house cooked system. Sounds like you may.
> I guess it will depend on your model and training parameters, and whether you have some in-house cooked system. Sounds like you may.

I have a local LLM, pay for subscriptions to two LLMs (Kendra, though the company pays for this one, and OpenAI), have several versions of Stable Diffusion type generation, etc. I have a lot.
> But hallucination can manifest in incorrect numbers, jumbled results that aren't quite summarized properly, and even social constructs coded into the system to give it some sort of bias or lean for or against certain qualifiers.

Hallucination only happens in LLMs. Local small installations don't hallucinate. LLMs (large language models) don't have data to fact-check with, which leads to the hallucination. The predictive nature of LLMs mixed with the way computers do math (you can look up computer math courses online if you need clarification on that) leads to the jumbled numbers. AI just SUCKS at high-level math; the last time I checked it was at less than 60% accuracy, lol.
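The "way computers do math" aside is easy to demonstrate: binary floating point rounds, while integer, Decimal, and Fraction arithmetic is exact, which is why a deterministic calculator is dependable in a way a predictive model is not. A quick illustration:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly, so tiny errors creep in.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact arithmetic is deterministic and correct every time.
print(Decimal("0.1") + Decimal("0.2"))    # 0.3
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
print(2 ** 128)  # Python integers are arbitrary precision: no overflow
```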
> This can be more benign, like if you plug it into a ticketing system and it scans past tickets to see if your specific problems were solved previously and gives you those options. But it can get worse when people plug it into an HRIS or BI data and it starts misfiring ever so slightly, and people make bad decisions due to faulty data points they were led to believe are accurate.

It's always benign due to the way computers work. There's no such thing as malignant hallucination. Wrong is just that: wrong.
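The ticketing use case in the quote is easy to prototype without any LLM at all. Here is a minimal sketch using stdlib string similarity to surface previously solved tickets; the ticket data is made up, and a real deployment would use embeddings or a search index rather than `difflib`.

```python
from difflib import SequenceMatcher

# Made-up past tickets: (problem description, resolution)
past_tickets = [
    ("VPN disconnects every few minutes", "Updated the client to v5.2"),
    ("Cannot print to the 3rd floor printer", "Re-added the printer queue"),
    ("Laptop fan runs loud constantly", "Cleaned vents and updated BIOS"),
]

def similar_tickets(new_issue: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Rank past tickets by rough text similarity to the new issue."""
    def score(ticket: tuple[str, str]) -> float:
        return SequenceMatcher(None, new_issue.lower(), ticket[0].lower()).ratio()
    return sorted(past_tickets, key=score, reverse=True)[:top_k]

for problem, fix in similar_tickets("VPN keeps dropping my connection"):
    print(f"{problem} -> {fix}")
```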
> AI just SUCKS at high-level math; the last time I checked it was at less than 60% accuracy, lol.

To expand on this, I will say that people who use AI to do math are idiots in the first place; computers are great at math without using predictive, experimental thought machines.
> I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business, and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of trying to defeat it.

What bothers me is that I can no longer hide behind the idea that it won't happen during my lifetime... I figured I'd be retired before it became a significant threat to my job, yet now I'm not totally sure...
> You're looking at it wrong, as if AI were a product for consumption. It's not.

Yet they are, too, and always will be.
> Hallucination occurs only in LLMs because they don't natively have access to fact-checking data and they are created to predict answers.

On this I agree, and…