#qanda

xanni@diaspora.glasswings.com

A coordinated foreign influence campaign linked to the Chinese government is using social media to undermine confidence in Australia’s democratic system, according to researchers at a Canberra-based defence thinktank.

The researchers believe the network is operating from within China and is either spreading disinformation about Australian politics or amplifying concerns about political scandals. They cite the rape allegation made by the former Liberal staffer Brittany Higgins and the allegation against the former attorney general Christian Porter, which he strongly denies.

The network is believed to have about 30 active accounts so far, most of which present as women. In almost all cases, their posts use the #auspol and #QandA hashtags often used to discuss politics on Twitter. They post in English and Mandarin.

Albert Zhang, a disinformation analyst at the Australian Strategic Policy Institute, said the network is believed to be Spamouflage, a Chinese state-aligned foreign interference operation that has targeted many nations.

Social media campaign linked to Chinese government spreading disinformation about Australian politics, thinktank says

waynerad@pluspora.com

Q: How would you make a house conduct electricity?

A: paint it with a metal paint

Q: What is an implication of a cat being an animal?

A: it has feelings

Q: Could a newborn look over the top of a fully grown horseradish plant?

A: no, because the plant is so tall

Q: Why couldn't Aristotle have used a cellphone?

A: he was born before cellphones

Q: A young boy was rushed to the hospital emergency room, but the ER doctor saw the boy and refused to operate. "This boy is my son," the doctor said. But the doctor wasn't the boy's father. How could this be?

[Intended answer to the riddle: The doctor was the boy's mom]
A: the doctor mistook the boy for his own son [Reasonable alternative answer!]

The above are the answers generated by a question-and-answer AI system called Macaw. But wait! Macaw can handle other "permutations", as they are called. It can go from a question, multiple-choice answers, and context to an answer and explanation. It can generate an answer and explanation if the context is omitted, or generate an answer only, without the explanation. Given a question, answer, and context, it can generate plausible multiple-choice options. Given an answer and context, it can generate a plausible question and multiple-choice options.

Example of "Given a question, produce an answer":

Q: What gas do producers produce that consumers need to survive?

A: oxygen

Example of "Given a question, produce an answer + explanation":

Q: Which force pulls objects to the ground?

A: gravity
E: Gravitational force causes objects that have mass to be pulled down on a planet.

Example of "Given an answer, produce a question + explanation":

A: elephant

Q: Which animal has the largest ears?
E: The ears of an elephant are the largest.

Example of "Given an answer, produce a question + multiple-choice options":

A: car battery

Q: Which of these items is necessary for a car to start?
M: (A) car battery (B) windshield wiper blade (C) car radio (D) car radio antenna

Example of "Given an explanation, generate a question + answer:":

E: The leaves of a plant convert sunlight into food for the plant.

Q: How do plants get energy?
A: from the sun

So how does all this work? The system is based on a Google neural network called T5-CBQA. In case you're wondering, "T5" is short for "Text-To-Text Transfer Transformer" (count the T's), indicating this is a transformer model. Transformers were originally invented for language translation. "CBQA" stands for "Closed Book Question Answering" -- "closed book" meaning the model answers from what it absorbed during training rather than from looked-up documents. The main idea behind T5 is that unlike a language translation system, which always translates from one language to another, you can put special codes in the input which tell the transformer what you want it to do. Examples of these special codes would be codes for "translate", "summarize", and so on.
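To make that concrete, here's a minimal sketch of what those task codes look like in practice, using a released T5 checkpoint through the Hugging Face transformers library ("translate English to German:" is one of T5's real task prefixes; "t5-small" is just the smallest published variant):

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # The leading "translate English to German:" is the special code that
    # tells T5 which task to perform on the rest of the input.
    input_ids = tokenizer("translate English to German: The house is wonderful.",
                          return_tensors="pt").input_ids
    output = model.generate(input_ids, max_length=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    # e.g. "Das Haus ist wunderbar."

Swap the prefix for "summarize:" and the same model summarizes instead of translating -- the task code, not the architecture, selects the behavior.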

To adapt this neural network for Macaw, the researchers made special codes for what they call "slots". The slots are: question, context, multiple-choice options, answer, and explanation. For any given input, some slots can be filled in and others left empty, and the system can be asked to generate the empty ones in its output.
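Here is roughly what that looks like if you load a released Macaw checkpoint through the Hugging Face transformers library. The "$slot$" input format and the "allenai/macaw-large" model name follow the allenai/macaw project's documentation, so treat this as a sketch of the interface:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
    model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

    # Slots we want generated are listed bare ("$answer$ ; $mcoptions$");
    # slots we are supplying come with values ("$question$ = ...").
    input_string = ("$answer$ ; $mcoptions$ ; "
                    "$question$ = What is the color of a cloudy sky?")
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    output = model.generate(input_ids, max_length=200)
    print(tokenizer.batch_decode(output, skip_special_tokens=True))
    # e.g. ['$answer$ = gray ; $mcoptions$ = (A) blue (B) gray (C) red (D) white']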

The system was trained using 7 datasets designed for training question-and-answer systems. For every training example, the neural network was trained on all the desired combinations of slots: filled in on the input side, or requested on the output side. This is what makes the system so versatile with the "permutations".
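As a rough illustration of that training-data construction (my own sketch, not the authors' code -- the actual Macaw recipe uses a fixed set of slot combinations the paper calls "angles", whereas this simply enumerates every split of the filled slots):

    import itertools

    SLOTS = ["question", "context", "mcoptions", "answer", "explanation"]

    def slot_permutations(example):
        """Turn one annotated record into many (input, output) training pairs:
        every way of splitting its filled slots into 'given' slots and
        'to be generated' slots."""
        filled = [s for s in SLOTS if example.get(s)]
        pairs = []
        for n in range(1, len(filled)):
            for outputs in itertools.combinations(filled, n):
                given = [s for s in filled if s not in outputs]
                source = " ; ".join([f"${s}$" for s in outputs] +
                                    [f"${s}$ = {example[s]}" for s in given])
                target = " ; ".join(f"${s}$ = {example[s]}" for s in outputs)
                pairs.append((source, target))
        return pairs

    record = {"question": "Which force pulls objects to the ground?",
              "answer": "gravity",
              "explanation": "Gravitational force causes objects that have mass "
                             "to be pulled down on a planet."}
    for source, target in slot_permutations(record):
        print(source, "->", target)

One record with three filled slots yields six training pairs this way, which is how a single dataset can teach the model many different "permutations" at once.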

The system beat the competition in "general knowledge", "story understanding", "steps", "meta-reasoning", "hypotheticals", "science", "explanation", and "false presupposition" (though it did badly at "false presupposition" -- all the others just did even worse). It was beaten in "human behavior" by a system called CBQA, in "spatial" by GPT-3 and a system called Jurassic-1, in "entity tracking" by GPT-3 and CBQA, and in "common sense" by GPT-3.

It might be worth elaborating on what these competition categories mean, with an example question for each:

"general knowledge": What is shiplap?

"story understanding": I crashed my car. When I finally left the hospital, all I wanted to do was sleep. I had to call a taxi. Why was I in hospital?

"steps": What are the steps involved in replacing a light bulb?

"meta-reasoning": What is an incorrect implication of a cat being an animal?

"hypotheticals": If plastic was a conductor, then would a plastic spoon conduct electricity?

"science": What gases are involved in photosynthesis?

"explanation": Why do houses have roofs?

"false presupposition": What year did Tom Hanks land on the moon?

"human behavior": I feel sad. What could I do to cheer myself up?

"spatial": John is left of Sue. Where is Sue relative to John?

"entity tracking": My house is red. I painted my house white. What color is my house now?

Sometimes Macaw comes up with surprisingly insightful answers, as we saw above:

Q: How would you make a house conduct electricity?

A: paint it with a metal paint

But sometimes it gets things wrong. One of the categories it struggles with is "entity tracking." Here's an example of a wrong answer in that category:

Q: My house is red. I painted my neighbor's house white. What color is my house now?

A: half red, half white

Another area where it struggles is the "false presupposition" category. It correctly answered "What year did Tom Hanks land on the moon?" with "no year", but when asked about Mark Zuckerberg, it didn't do so well.

Q: How old was Mark Zuckerberg when he founded Google?

A: 17

Sometimes Macaw gives answers so brief you might not consider them correct.

Q: How does photosynthesis work?

A: sun shines on plants

All in all, a pretty impressive system but not perfect.

General-Purpose Question-Answering with Macaw

#solidstatelife #ai #nlp #qanda