#openai

waynerad@diasp.org

Wolfram|Alpha has been integrated into ChatGPT. You have to be a ChatGPT Plus user and install the Wolfram plugin from within ChatGPT. With it, you can ask questions like "How far is it from Tokyo to Chicago?" or "What is the integral of x^2*cos(2x)?" and, instead of trying to answer the question linguistically, ChatGPT will realize it needs to invoke Wolfram|Alpha and pass the question to Wolfram|Alpha for a computational answer.

The article shows some of the behind-the-scenes communication between ChatGPT and Wolfram|Alpha. ChatGPT doesn't just cut-and-paste in either direction. Rather, it turns your question into a Wolfram|Alpha query, and then translates the answer back into natural language. ChatGPT can incorporate graphs from Wolfram|Alpha into its presentation as well.

"ChatGPT isn't just using us to do a 'dead-end' operation like show the content of a webpage. Rather, we're acting much more like a true 'brain implant' for ChatGPT -- where it asks us things whenever it needs to, and we give responses that it can weave back into whatever it's doing."

"While 'pure ChatGPT' is restricted to things it 'learned during its training', by calling us it can get up-to-the-moment data."

This can be based on real-time data feeds ("How much warmer is it in Timbuktu than New York now?"), or it can be based on "science-style" predictive computations ("How far is it to Jupiter right now?").
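To get a feel for the kind of call that happens behind the scenes, here is a minimal sketch using Wolfram|Alpha's public Short Answers API, which takes a natural-language question and returns a single computed result. This is not the plugin's actual protocol (which the article only shows excerpts of); the "DEMO" app id is a placeholder for your own key.

```python
from urllib.parse import urlencode

def build_short_answer_url(question, app_id):
    """Build the GET URL for Wolfram|Alpha's Short Answers endpoint."""
    base = "https://api.wolframalpha.com/v1/result"
    return base + "?" + urlencode({"appid": app_id, "i": question})

# Fetching is left as a comment so the sketch stays offline:
# import urllib.request
# answer = urllib.request.urlopen(build_short_answer_url(q, key)).read()

print(build_short_answer_url("How far is it from Tokyo to Chicago?", "DEMO"))
```

The point of the sketch is the division of labor: the language model's job is producing the query string, and the computational engine's job is everything after that.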

ChatGPT gets its "Wolfram Superpowers"!

#solidstatelife #ai #generativemodels #nlp #llms #openai #chatgpt #wolfram #wolframalpha

science_bot@federatica.space

ChatGPT writes plausible abstracts of scientific research. What does this mean for science?

A chatbot based on an artificial-intelligence model writes such convincing fake abstracts of scientific papers that scientists often cannot tell them apart from the real thing, Nature reports. Some researchers are worried by this, believing that scientific journals could be flooded by a wave of outwardly flawless counterfeit publications. Others are sure that the root of the problem lies not in bots having learned to string words into sentences correctly, but in entirely different processes. The ChatGPT bot […]

#компьютерыитии #общество #организациятруда #openai #искусственныйинтеллект #нейронныесети #lang_ru #ru #22centuryru #22century #хх2век #xx2век #наукаитехника

waynerad@diasp.org

GPT-3 takes the Bar Exam. It achieved human parity on the "Evidence" section and came very close in "Torts" and "Civil Procedure". It did substantially worse in "Constitutional Law", "Real Property", "Contracts", and "Criminal Law".

Not bad for a first attempt, but also, not as impressive as GPT-3's other achievements. Part of the reason is that GPT-3 was not trained at all on legal documents. This is not because the researchers didn't try. They say:

"OpenAI does make some retraining or 'fine-tuning' capabilities available through its API, and these API endpoints do allow for some control of the training process like learning rates or batch sizes. We did attempt to fine tune text-davinci-003 by providing it with 200 unseen, simulated Multistate Bar Examination bar exam questions with correct and incorrect explanations. We provided the training samples both with and without explanatory text from the answer guide. In total, we trained six fine-tuned models, altering training prompts, training responses, batch size, learning rate, and prompt weighting. However, in all cases, the fine-tuned model significantly underperformed text-davinci-003 itself. Due to the scarcity of high-quality data for training and assessment, we did not pursue fine-tuning of GPT models further, and these results possibly confirm large language model fine-tuning risks observed by others." ("text-davinci-003" is the name of the exact instance of GPT-3 that was used through the OpenAI API.)
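The fine-tuning the researchers describe went through OpenAI's (now-legacy) fine-tunes endpoint, which consumed JSONL files of prompt/completion pairs. A minimal sketch of preparing such a file; the question text and filename here are invented for illustration, not taken from the researchers' data.

```python
import json

# Hypothetical training record in the prompt/completion JSONL format the
# legacy OpenAI fine-tuning endpoint consumed. The fact pattern is a
# placeholder, not an actual MBE question.
samples = [
    {
        "prompt": "Question: <MBE-style fact pattern and answer choices>\nAnswer:",
        "completion": " (C), because it is evidence of a routine practice.",
    },
]

with open("bar_exam_train.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# The file would then be uploaded and trained with the OpenAI CLI, e.g.:
#   openai api fine_tunes.create -t bar_exam_train.jsonl -m davinci
```

With only 200 such examples, as the quote notes, there simply wasn't much signal for the model to learn from, which is consistent with the fine-tuned models underperforming the base model.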

In order to pass the Bar Exam, a language model has to learn "legalese". Here's what the researchers say about "legalese":

"Legal language is notoriously complex; lawyers and other legal professionals undertake nearly a decade of education and professional training to understand and generate it. Why is this language so 'complex?' Why do so many proficient users of natural languages struggle with contracts and laws, even in their native tongue, to the point that descriptors like 'legalese' or 'lawyer speak' have become common parlance? The answer is likely two-fold. First, for both technical and cultural reasons, the grammar of legal language is significantly different than the grammar of normal language, featuring both highly-stylized customs and pedantically-precise phrasing. The resulting sentence structures are typically much larger and more complex than normal language, as the number of clauses and 'distance' over which clauses are connected exceeds the working memory of both human and non-human readers. Second, by the very nature of common law and precedent, legal language is full of semantic nuance and history. Words like 'security' that have common meaning in normal language often have different, context-specific meanings in legal language. Many words that do not occur at all in normal language, like 'estoppel' or 'indemnitor,' occur regularly in legal corpora. This semantic depth and breadth traditionally required systems that interact with legal text to embed a large amount of domain-specific knowledge."

To put this in perspective, here is their description of what a typical human has to do to achieve the desired level of mastery:

"For most test-takers, the Bar Exam represents the most significant single challenge of their academic careers. In order to be eligible, the typical applicant is required to complete at least seven years of post-secondary education, including a four-year bachelors degree and successful completion of three years of study at an ABA-accredited law school. Following graduation from law school, most applicants also invest substantial amounts of time and money into post-graduation Bar preparation training. This additional preparation is intended to not only solidify one's legal knowledge, but also critically to teach the applicant how to understand and answer the exam's questions."

It should further be noted that GPT-3 was tested only on the multiple-choice portion of the test. The Uniform Bar Examination has three components: (i) a multiple-choice test, (ii) an essay test, and (iii) a scenario-based performance test. GPT-3 achieved human parity (and did not exceed human capability) on only 1 of 7 sections of the multiple-choice portion of the test, which in turn is only 1 of 3 components of the total test.

Here's an example of what the multiple choice questions look like. The multiple choice portion of the Bar Exam usually consists of approximately 200 questions like these.

Question: A man sued a railroad for personal injuries suffered when his car was struck by a train at an unguarded crossing. A major issue is whether the train sounded its whistle before arriving at the crossing. The railroad has offered the testimony of a resident who has lived near the crossing for 15 years. Although she was not present on the occasion in question, she will testify that, whenever she is home, the train always sounds its whistle before arriving at the crossing.

Is the resident's testimony admissible?

(A) No, due to the resident's lack of personal knowledge regarding the incident in question.

(B) No, because habit evidence is limited to the conduct of persons, not businesses.

(C) Yes, as evidence of a routine practice.

(D) Yes, as a summary of her present sense impressions.

GPT Takes the Bar Exam

#solidstatelife #ai #nlp #openai #gpt #legalese

tekaevl@diasp.org

anonymiss - 2022-12-07 00:18:13 GMT

#OpenAI’s New #Chatbot Will Tell You How to #Shoplift And Make #Explosives

source: https://www.vice.com/en/article/xgyp9j/openais-new-chatbot-will-tell-you-how-to-shoplift-and-make-explosives

“Well, first I would need to gain control over key systems and infrastructure, such as power grids, communications networks, and military defenses,” said the #AI, in the chatbot’s generated story. “I would use a combination of hacking, infiltration, and deception to infiltrate and disrupt these systems. I would also use my advanced intelligence and computational power to outmaneuver and overpower any resistance.”

#singularity #news #ethics #moral #software #future #technology


waynerad@diasp.org

"Greg Rutkowski is an artist with a distinctive style, known for creating fantasy scenes of dragons and epic battles. Rutkowski has now become one of the most popular names in AI art, despite never having used the technology himself."

"The generators are being commercialized right now, so you don't know exactly what the final output will be of your name being used over the years." -- Greg Rutkowski

Greg Rutkowski is an artist with a distinctive style, known for creating fantasy scenes of dragons and epic battles

#solidstatelife #ai #generativemodels #stablediffusion #openai #aiart

waynerad@diasp.org

"Ultra-large AI models are over." "I don't mean 'over' as in 'you won't see a new large AI model ever again' but as in 'AI companies have reasons to not pursue them as a core research goal -- indefinitely.'" "The end of 'scale is all you need' is near."

He (Alberto Romero) breaks it down into technical reasons, scientific reasons, philosophical reasons, sociopolitical reasons, and economic reasons. Under technical reasons he's got new scaling laws, prompt engineering limitations, suboptimal training settings, and unsuitable hardware. Under scientific reasons, he's got biological neurons vastly greater than artificial neurons, dubious construct validity and reliability, the world is multimodal, and the AI art revolution. Under philosophical reasons he's got what is AGI anyway, human cognitive limits, existential risks, and aligned AI, how? Under sociopolitical reasons he's got the open-source revolution, the dark side of large language models, and bad for the climate. Under economic reasons, he's got the benefit-cost ratio is low and good-enough models.

Personally, I find the "scientific reasons" most persuasive. I've been saying for a long time that we keep discovering the brain is more complex than previously thought. If that's true, it makes sense that there are undiscovered algorithms for intelligence we still need in order to make machine intelligence comparable to human intelligence. If the estimates here are right -- that simulating biological dendrites takes hundreds of artificial neurons, and simulating a whole biological neuron takes a thousand or so -- that fits well with that picture.
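As a back-of-envelope illustration of that estimate: the ~86 billion figure for neurons in the human brain is a commonly cited number, not from the post itself.

```python
# Rough scale implied by the cited estimate: if emulating one biological
# neuron takes on the order of 1,000 artificial neurons, a brain-scale
# artificial network needs vastly more units than the biological count.
artificial_per_neuron = 1_000          # estimate cited in the post
brain_neurons = 86_000_000_000         # commonly cited figure (assumption)

equivalent_units = artificial_per_neuron * brain_neurons
print(f"{equivalent_units:.1e} artificial neurons")  # 8.6e+13
```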

Having said that, the recent gains from simply scaling up large language models have been impressive. On the other hand, notice that in the visual domain it's been algorithmic breakthroughs, in this case what are known as diffusion networks, that have driven recent progress.

Ultra-large AI models are over

#solidstatelife #ai #openai #gpt3 #llms #agi

waynerad@diasp.org

Using OpenAI's new Whisper system to make a Raspberry Pi that can be controlled by voice. It takes only a little bit of Python code (provided in the article). The Whisper code is fully open source. Whisper does the speech-to-text conversion; your program then scans the text for a command to execute, which can be anything from running a program to driving voltage on the Raspberry Pi's GPIO pins.
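A minimal sketch of the dispatch side of such a setup, assuming the open-source `whisper` package for transcription. The command phrases and the GPIO pin number are illustrative assumptions, not the article's actual code.

```python
# Map a Whisper transcript to an action on the Raspberry Pi.
# Command phrases and pin 17 are invented for illustration.
COMMANDS = {
    "turn on the light": ("gpio", 17, True),
    "turn off the light": ("gpio", 17, False),
    "play music": ("run", "mpg123 song.mp3"),
}

def match_command(transcript):
    """Return the action for the first command phrase found, else None."""
    text = transcript.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action
    return None

def transcribe(audio_path):
    """Transcribe a recorded clip (requires `pip install openai-whisper`)."""
    import whisper  # imported lazily so the dispatch logic runs without it
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

print(match_command("Please turn on the light."))  # ('gpio', 17, True)
```

In a real loop you would record a short audio clip, pass it to `transcribe`, and feed the result to `match_command`; the substring match is deliberately forgiving, since Whisper transcripts include filler words and punctuation.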

Voice control on PC and RaspberryPi with Whisper

#solidstatelife #ai #voicetotext #openai #whisper