#claude

5 posts · 5 participants · 0 posts today
Replied in thread

@gerrymcgovern #Claude and ALL #AI have an #ObjectiveFunction. They attempt to mathematically minimize the number of errors. An error is anything other than what will be considered a "correct response". There are hundreds. I treat them like #SpecialNeedsStudents: some can see, others can not; some can remember, others can not; some can fold proteins, most can not. Gemini will not answer questions about #White #Conservative #Politicians. Some even interpret songs. Learn about AI: youtu.be/M2VBmHOYpV8?feature=s

#NeuralNetwork #Photoshop #JavaScript #жаба #гадюка #СредаМоиЧуваки

#VibeCoding, you say? So I wanted a script for Photoshop. Nothing complicated, just two loops, but let's say I'm not a programmer and certainly don't know the nuances of Photoshop's object model. So I decided to let the AIs take a crack at it...

  • #Phind: got it exactly backwards, said that a script like that comes bundled with Photoshop (the latest version, apparently), gave a YouTube link with the right timestamp (where a girl explains how to run scripts in Photoshop), and, as a second option, wrote some chunk of code that does the wrong thing.
  • #Claude, #Mistral: understood the idea and wrote the code, but missed the nuance that Photoshop can only drive layers through app.activeDocument, so the script throws an error (a rough sketch of this nuance follows the list).
  • #Perplexity: understood it correctly and wrote the code with comments in Russian and alert() notifications. The only thing I changed was i++ to i-- in the loop, to make it fully logical and tidy.
  • #ChatGPT: likewise, working code, but far too verbose. When asked why, it said it was for extra safety and to make future extension easier. After that it gave a shorter version.
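
The post doesn't include the final script, so purely as an illustration of the app.activeDocument nuance, here is a minimal ExtendScript-style sketch; the task (hiding layers whose names start with "tmp_") and the names in it are made up, not taken from the post.

    // A minimal sketch, not the script from the post: the task (hiding
    // layers whose names start with "tmp_") is invented for illustration.
    // Key nuance from the thread: layers are driven through app.activeDocument;
    // addressing them any other way is what made the generated scripts error out.
    var doc = app.activeDocument;
    var hidden = 0;

    // The loop counts down (i--), matching the author's preference; either
    // direction works here, but counting down also keeps indices stable
    // if layers ever get removed inside the loop.
    for (var i = doc.artLayers.length - 1; i >= 0; i--) {
        var layer = doc.artLayers[i];
        if (layer.name.indexOf("tmp_") === 0) {
            layer.visible = false;
            hidden++;
        }
    }

    alert("Hidden " + hidden + " of " + doc.artLayers.length + " layers");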

No conclusions here, but a couple of things I noted for myself:

  • LLMs can handle the object model of Adobe products; that's good.
  • Knowing Python and JavaScript at least at a "sounding it out syllable by syllable" level is a must.

Header image #generated by #DALLE3

People have low expectations for AI. I asked ChatGPT (paid account, not the free stuff) what would make ChatGPT better. I think the answer is a lot higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works. Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

  • Stay sharp but shallow (good at Q&A, bad at memory), or
  • Drift into friendly nonsense (chatty, but brittle).

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.

....

When you see stuff posted online meant to play down the quality of output that's possible, keep in mind that this was a single question with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.

#ai #openai #deepseek

US politicians are unable to stand up to Donald Trump, French Senator #Claude #Malhuret said, after going viral for a speech in which he described #Trump as presiding over "Nero's court" and billionaire top adviser Elon #Musk as a "buffoon on ketamine".

Republicans are afraid of reprisals while Democrats are still reeling from their presidential defeat, Malhuret said.

By calling Donald Trump an “incendiary emperor” and Elon Musk a “buffoon on ketamine,” the 75-year-old senator from the French center-right Horizons party has been catapulted overnight into the spotlight thanks to an exceptionally viral video.
france24.com/en/live-news/2025

FRANCE 24 · US politicians can't stand up to Trump, says French senator who went viral for 'Nero' comment · By FRANCE 24
Continued thread

"Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.

It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.

Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response."

anthropic.com/research/tracing
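
A toy sketch of the gating described above; this is not Anthropic's circuit, just the refuse-by-default logic restated as code, with invented feature names, thresholds and data:

    // Toy restatement of the behaviour described in the quote, not the real circuit:
    // refusal is the default; a strong enough "known entity" signal suppresses it,
    // and a misfire happens when a name is familiar but no facts back it up.
    function answerOrRefuse(entity, kb) {
        var familiarity = kb.nameFamiliarity[entity] || 0; // "known entity" feature
        var facts = kb.facts[entity] || [];                // what is actually known

        var refuse = true;                // default circuit: "insufficient information"
        if (familiarity > 0.5) {
            refuse = false;               // known-entity feature inhibits the refusal
        }

        if (refuse) {
            return "I don't have enough information about " + entity + ".";
        }
        if (facts.length === 0) {
            // The misfire: refusal was suppressed, but there is nothing to recall,
            // so whatever comes out here is confabulated.
            return "(plausible but made-up answer about " + entity + ")";
        }
        return facts[0];
    }

    var kb = {
        nameFamiliarity: { "Michael Jordan": 0.9, "Michael Batkin": 0.0 },
        facts: { "Michael Jordan": ["Michael Jordan played basketball for the Chicago Bulls."] }
    };
    console.log(answerOrRefuse("Michael Jordan", kb)); // a real answer
    console.log(answerOrRefuse("Michael Batkin", kb)); // the default refusal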

#WaPo We asked 5 #AI bots to write tough emails. One beat a human.

A panel of communications experts helped us test how well artificial intelligence tools ChatGPT, Claude, Copilot, DeepSeek and Gemini write emails.

5. Microsoft #Copilot : 23 out of 100
4. OpenAI’s #ChatGPT : 43 out of 100
3. Google’s #Gemini : 44 out of 100
2. #DeepSeek : 45 out of 100
1. Anthropic’s #Claude : 50 out of 100

#GiftedArticle

wapo.st/3FFo34V

The Washington Post · We asked 5 AI bots to write tough emails. One beat a human. By Geoffrey A. Fowler