
Posted

I’m not sure if this belongs here or in Tech Talk; I’m sure the mods will move it if needed.

 

Providers often say one of their biggest frustrations is the sheer amount of time spent replying to messages, especially from time wasters. At the same time, clients regularly complain about slow or nonexistent responses from providers. That gap makes me wonder: Does it make sense to let an AI agent step in?

An AI could instantly handle basic questions, screen out nonsense, and book appointments without emotion, burnout, or delays. No mood swings. No forgotten messages. No “sorry, just seeing this now.”
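To be clear, the screening part wouldn't even need to be fancy. Here's a toy sketch of what I have in mind; the canned answers and keyword rules are made up, standing in for whatever model or filter a provider would actually use:

```python
# Toy inquiry screener: a stand-in for an LLM-based filter.
# The canned answers and red-flag keywords are invented for illustration.

CANNED_ANSWERS = {
    "rates": "Rates and availability are listed in my ad.",
    "schedule": "I book by appointment; please include your preferred day and time.",
}

RED_FLAGS = ["free session", "discount", "send more pics first"]

def screen(message: str) -> tuple[str, str | None]:
    """Return (action, reply). Actions: 'ignore', 'auto_reply', 'escalate'."""
    text = message.lower()
    if any(flag in text for flag in RED_FLAGS):
        return ("ignore", None)            # obvious time waster
    for topic, answer in CANNED_ANSWERS.items():
        if topic in text:
            return ("auto_reply", answer)  # basic question, instant reply
    return ("escalate", None)              # anything else goes to a human

print(screen("hey, what are your rates?"))   # ('auto_reply', ...)
print(screen("can we meet tonight at 9?"))   # ('escalate', None)
```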

Of course, that raises obvious questions about authenticity and trust. But given how common these issues are on both sides, is this an inevitable evolution or a step too far?

Posted

I don't like the idea of having to deal with an AI pimp. 😀

When agencies were more abundant, only one I dealt with seemed to be straightforward. With the other, I often felt like I was getting the runaround when I asked for/about specific providers. With every one of them, I felt a level of detachment from the provider because I couldn't ask him certain things directly. Every provider I met through an agency ultimately gave me their direct contact because they hated having to deal with a go-between.

Yes, I see the advantages of AI for a swamped provider, but there is also the possible issue of AI giving incorrect information, and the disconnect it creates.

 

Posted (edited)

Maybe, but I don't think it'd go as far as suggested. I could see it maybe screening initial inquiries if a provider so chooses. But if I were a provider, I'd personally still want the AI to hand me viable client candidates and to give the final OK myself on which ones I take and when. I'd just want it to weed out time wasters or incompatible clients; I wouldn't want it accepting meets and booking my calendar for me.

As someone who's tried implementing AI into my job, I've found a lot of its limitations and the places where it falls short of the hype/marketing. There are things it's just not ready for yet, and it still needs lots of hand-holding and double-checking of its work. AI/LLMs (large language models), despite all the marketing, are not sentient and aren't aware of what they say to you; they will spit out incorrect information confidently, and AI agents have done things like accidentally delete a user's hard drive and other dumb stuff.

When it comes to reply delays, I kind of see it like when people text you, don't get a response, and then call you thinking that will make you pick up; if I'm not available to reply to a text, I'm also not available to answer a call haha. And in the scenario where a provider is busy working and can't give the final OK to a meet the AI set up and passed along, there's still going to be a delay until the provider gets back to their computer/phone or whatever to give that OK. Both clients and providers really need to just accept that people aren't staring at their inbox or obligated to be reachable 24/7, and get over it, in my opinion 😆.

Another big issue AI use raises here is discretion/privacy. Unless the provider is running their own self-hosted implementation of an open-source AI, things like ChatGPT are logging everything you input and sending it home to OpenAI for further training of their models (some AI platforms claim to let you opt out of this, but with the zero regulation this space currently has, no one actually knows whether they do).

Things you put into ChatGPT are held on record and can be used against you in court, for example. And in places where what we all engage in here isn't exactly legal, it'd be very unwise to use AI this way unless you set up a private one, which the average person wouldn't know how to do; they'd just default to ChatGPT.

We already can't even get everyone who does this to use encrypted messaging apps like Signal as it is 🤣

Edited by DMonDude
Posted
On 12/13/2025 at 9:07 PM, DMonDude said:

Maybe, but I don't think it'd go as far as suggested. I could see it maybe screening initial inquiries if a provider so chooses. …

 

I agree that AI still has a long way to go and is far from perfect. That said, in my very limited experience, it performs surprisingly well at repetitive tasks like answering messages, especially when it’s properly “trained” (e.g., with retrieval-augmented generation, RAG). I also agree that letting it handle actual bookings may be a step too far for now.
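To make the RAG point concrete, here is a rough sketch of the idea: pull the most relevant snippet from the provider's own FAQ and prepend it to the prompt, so the model answers from actual policies instead of guessing. The FAQ entries are invented, and the word-overlap retrieval is a simplification of what a real setup would do with embeddings:

```python
# Minimal RAG-style sketch: retrieve the closest FAQ entry by word
# overlap, then build a grounded prompt for whatever LLM you run.
# The FAQ content below is made up for illustration.

FAQ = [
    "Deposits: a 20% deposit is required to confirm any booking.",
    "Availability: weekday evenings; weekends by arrangement.",
    "Screening: new clients must provide a reference first.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question, FAQ)
    return (
        f"Answer using only this policy: {context}\n"
        f"Question: {question}\nAnswer:"
    )

# The assembled prompt would then go to a local or hosted model.
print(build_prompt("Do you need a deposit to book?"))
```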

As for concerns about the content being used against you in court, I wouldn’t lose much sleep over that for two simple reasons. First, running an AI agent anonymously isn’t particularly difficult. Second, those messages already live on your phone and with your mobile provider, so the privacy ship sailed a long time ago.

Posted

If it were at the level where it could work within the nuance required, I would love this.

The problem I would foresee is that we need to read through all the messages and get a gauge of the vibe before the meeting. It could be an issue if the AI doesn't truly reflect how we are in person; AI is an 'everyman' while we are all individuals. There isn't really a way to implant our personality.

Secondly, a lot of the time we don't realize the person is a time waster or prankster until we arrive at the hotel, or until we're ready and waiting to host and the client ghosts.

 

Either way, it would have to improve a lot, but how cool if it could handle even just the basic enquiries!

 

Posted (edited)
On 12/14/2025 at 7:04 PM, JamesB said:

 

I agree that AI still has a long way to go and is far from perfect. That said, in my very limited experience, it performs surprisingly well at repetitive tasks like answering messages… …

Not sure I agree on the latter half; it depends on your definition of using AI agents anonymously. Every centralized one I know of requires you to sign up with an email and make an account, and any of the ones that are actually anonymous or private require a proper manual setup on your computer. You have to know MCP and APIs and this and that. Most people don't know how to do this and don't want to know; the whole reason people want AI in the first place is to not be bothered with technical stuff, because the AI does it for them.

As far as the privacy stuff goes, I wouldn't put those two things on the same level. Mobile carriers use ephemeral storage for messages: they only keep the content of texts for up to a couple of months at most, while maintaining metadata like who you text and when, but not the content. When law enforcement subpoenas them, they are mainly handing over metadata unless it's within that couple-weeks-to-months time frame. And now that Apple's iMessage is encrypted, Android's RCS is encrypted, and messages between the two are encrypted as well, there's no message content leaking without a hacker or law enforcement getting direct access to your device. It's inherently harder to get into someone's texts with how many layers there are between the content and a bad actor. It's why, when mobile carriers do have a data breach, it's almost always only the info you used to set up your account and not the content of your texts or recorded phone calls, because they generally just don't have them anymore.

There are multiple accounts to bypass, SIM card activations, physical access usually needed, etc. Meanwhile, ChatGPT is not encrypting anything you input into it, and anything they hold could leak tomorrow, whether from some intern hitting the wrong button or from being targeted by North Korean hackers or whatever else. That's before you get to law enforcement subpoenas, where they're compelled to hand over preserved chat log content and not just metadata. It's already happened, actually: https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/ A misconfigured Share feature led to private chat logs of users being leaked, scraped by Google Search, and made publicly accessible.

So yeah, it's just not the same level at all. Especially when all these AI companies are still in "move fast, break things, make money at all costs so we don't cause another Dot Com Crash, apologize and settle out of court later if needed" mode and are still more or less completely unregulated. The notion that the ship has already sailed isn't true; that sentiment is how they get us to be OK with them encroaching on us more. There are multiple ships in the dock, and some of them haven't gone anywhere yet, and they won't till we let them.

Edited by DMonDude
added a goofy boat analogy
Posted
17 hours ago, DMonDude said:

Not sure I agree on the latter half; it depends on your definition of using AI agents anonymously. …

 

I don’t want to derail my own thread by going off topic, but since privacy is a legitimate concern in this forum, I think some clarification is warranted.

It is entirely possible to create an anonymous AI agent. However, anonymity is a spectrum, not a binary state: it ranges from simply not requiring an account to a fully decentralized, self-hosted system that leaves no digital footprint. On the basic end, there are cloud-based options using VPNs, Tor, and "no-signup" web interfaces (e.g., AnonGPT, xPrivo); in the middle, local LLMs with Ollama or LM Studio; and at the far end, running an agent on Tails OS or via Tor using decentralized compute.

The argument that anonymity requires a computer science degree and complex technical knowledge like MCP is outdated. You don't need to know MCP and APIs. Applications like Ollama, LM Studio, Jan.ai, and AnythingLLM have turned private AI into a one-click install, letting users download open-source models and run them offline in minutes. Platforms like n8n let you build autonomous agents via drag-and-drop workflows. Vellum AI offers beginner-friendly guides to deploy agents without writing a line of code, handling everything from setup to integration. Tools like Quixl use visual interfaces for creating agents; no programming required.
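To back up the "minutes, not a computer science degree" claim, here is what talking to a local model actually looks like once Ollama is installed. I'm assuming `ollama pull llama3` has been run and the server is on its default port; nothing here leaves your machine:

```python
# Query a local Ollama server over its default local HTTP API.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",
    "prompt": "Draft a polite reply declining a same-day booking.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```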

Claiming that mobile messaging's security is inherently superior to AI chats like ChatGPT overlooks some critical facts. Yes, carriers use ephemeral storage for message content, retaining it briefly (I believe it is typically 7 days for SMS, or up to months in rare cases), while metadata (who, when, where) sticks around a lot longer. And yes, iMessage and RCS use end-to-end encryption (E2EE), but don't forget that not all messaging is E2EE; standard SMS isn't. It's also not accurate to say ChatGPT "is not encrypting anything you input". OpenAI actually encrypts all data at rest with AES-256 and in transit via TLS 1.2+, complies with GDPR/CCPA, and offers zero data retention. Yes, I know that is not the same as true end-to-end encryption.
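For anyone wondering what "encrypted at rest" buys you compared to E2EE, here is a small illustration (it uses the third-party `cryptography` package, and the sample chat log is invented; the point is who holds the key):

```python
# AES-256-GCM encryption "at rest", the way a service operator does it.
# The operator generates and keeps the key, so the operator can always
# decrypt (and be compelled to). With E2EE, only the endpoints have keys.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held server-side
aesgcm = AESGCM(key)

nonce = os.urandom(12)
log = b"client: are you free Friday at 8?"
blob = aesgcm.encrypt(nonce, log, None)  # this is what sits on disk

print(aesgcm.decrypt(nonce, blob, None).decode())  # operator reads it back
```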

In short: anonymous AI agents are accessible to non-experts without major tech hurdles, and AI privacy isn't the wild west you describe. A local setup you control isn't worse than messaging; often it's better.

So I stand behind my previous statement that “running an AI agent anonymously isn’t particularly difficult”. I guess we’ll have to agree to disagree.
