R Illustrating Reinforcement Learning from Human Feedback (RLHF)
New HuggingFace blog post on RLHF: https://huggingface.co/blog/rlhf
Motivated by ChatGPT and the lack of conceptually focused resources on the topic.
/r/MachineLearning
https://redd.it/zh2u3k
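The blog post above covers the RLHF pipeline conceptually. As a purely illustrative sketch (not code from the post), the reward model at the heart of RLHF is typically trained on human preference pairs with a Bradley-Terry style loss; the function below assumes scalar reward scores have already been computed for a preferred and a rejected response:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss commonly used to train an RLHF reward model:
    -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the model
    scores the human-preferred response further above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(preference_loss(2.0, 0.0))  # small loss: correct ranking
print(preference_loss(0.0, 2.0))  # large loss: inverted ranking
```

The trained reward model then supplies the scalar signal that the RL stage (e.g. PPO) optimizes against.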
But where do folks actually attend church? [OC]
/r/dataisbeautiful
https://redd.it/zh4358
[OC] São Paulo cut its homicide rate by 90% and is now about as safe as Boston. Mexico City is currently safer than Dallas and Denver.
/r/dataisbeautiful
https://redd.it/zh2r3h
[OC] Largest mergers & acquisitions, inflation adjusted
/r/dataisbeautiful
https://redd.it/zgt8ye
[OC] How to spot misleading charts? I would like to hear your opinion on the subject, also any tips design-wise?
/r/dataisbeautiful
https://redd.it/zg7pck
R Large language models are not zero-shot communicators
Paper: Large language models are not zero-shot communicators (arXiv)
Abstract:
Despite widespread use of LLMs as conversational agents, evaluations of performance fail to capture a crucial aspect of communication: interpreting language in context. Humans interpret language using beliefs and prior knowledge about the world. For example, we intuitively understand the response "I wore gloves" to the question "Did you leave fingerprints?" as meaning "No". To investigate whether LLMs have the ability to make this type of inference, known as an implicature, we design a simple task and evaluate widely used state-of-the-art models. We find that, despite only evaluating on utterances that require a binary inference (yes or no), most perform close to random. Models adapted to be "aligned with human intent" perform much better, but still show a significant gap with human performance. We present our findings as the starting point for further research into evaluating how LLMs interpret language in context and to drive the development of more pragmatic and useful models of human discourse.
Authors: Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, Edward Grefenstette
/r/MachineLearning
https://redd.it/zgr7nr
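The evaluation the abstract describes reduces to a binary task: does the model recover the intended "yes" or "no" from an indirect response? A minimal sketch of such a scoring harness, with invented examples in the spirit of the paper's "I wore gloves" illustration (the data and field names here are hypothetical, not the paper's benchmark):

```python
# Hypothetical implicature examples: each pairs a question and an indirect
# response with the gold binary inference a human would draw.
examples = [
    {"question": "Did you leave fingerprints?", "response": "I wore gloves", "gold": "no"},
    {"question": "Are you coming to the party?", "response": "I have to work", "gold": "no"},
    {"question": "Is the report ready?", "response": "It's on your desk", "gold": "yes"},
]

def accuracy(predictions, examples):
    """Fraction of model answers matching the gold inference; the paper
    compares this against the 50% random baseline for a binary task."""
    correct = sum(p == ex["gold"] for p, ex in zip(predictions, examples))
    return correct / len(examples)

# A model that answers "yes" to everything captures no implicatures here.
print(accuracy(["yes", "yes", "yes"], examples))
```

Per the abstract, most base models land near the 0.5 random baseline on this kind of task, while instruction-aligned models do better but still trail humans.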
R What the DAAM: Interpreting Stable Diffusion and Uncovering Generation Entanglement

Paper: What the DAAM: Interpreting Stable Diffusion Using Cross Attention (arXiv paper, codebase)
Abstract:
Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses. In this paper, we perform a text-image attribution analysis on Stable Diffusion, a recently open-sourced model. To produce pixel-level attribution maps, we upscale and aggregate cross-attention word-pixel scores in the denoising subnetwork, naming our method DAAM. We evaluate its correctness by testing its semantic segmentation ability on nouns, as well as its generalized attribution quality on all parts of speech, rated by humans. We then apply DAAM to study the role of syntax in the pixel space, characterizing head--dependent heat map interaction patterns for ten common dependency relations. Finally, we study several semantic phenomena using DAAM, with a focus on feature entanglement, where we find that cohyponyms worsen generation quality and descriptive adjectives attend too broadly. To our knowledge, we are the first to interpret large diffusion models from a visuolinguistic perspective, which enables future lines of research.
Authors: Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, Ferhan Ture
/r/MachineLearning
https://redd.it/zgg7y7
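The core mechanism in the abstract, upscaling and aggregating cross-attention word-pixel scores into a pixel-level attribution map, can be sketched in a few lines of NumPy. This is an illustrative reading of the abstract, not the authors' DAAM implementation; shapes, the nearest-neighbour upscaling, and the normalization are assumptions:

```python
import numpy as np

def daam_heatmap(attn_maps, out_size):
    """Aggregate cross-attention scores for one token into a pixel-level
    attribution map: upscale each low-resolution map to the output size,
    then sum across maps (e.g. heads/timesteps in the denoising subnetwork).

    attn_maps: list of (h, h) arrays of word-pixel attention scores,
               where each h divides out_size (illustrative assumption).
    """
    heat = np.zeros((out_size, out_size))
    for m in attn_maps:
        factor = out_size // m.shape[0]
        # nearest-neighbour upscaling via a Kronecker product with a block of ones
        heat += np.kron(m, np.ones((factor, factor)))
    return heat / heat.max()  # normalize to [0, 1] for visualization

maps = [np.random.rand(8, 8), np.random.rand(16, 16)]
print(daam_heatmap(maps, 64).shape)  # (64, 64)
```

Thresholding such a map per noun is what lets the paper evaluate DAAM as a semantic segmenter.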
Please help me explain to a student, in the simplest terms possible, what is wrong here.
/r/dataisugly
https://redd.it/zgcgay
[OC] Length of Time to Watch Professional Sports
/r/dataisbeautiful
https://redd.it/zgggp6
Countries with English-speaking Leaders, Europe (Dec. 2022)
/r/MapPorn
https://redd.it/zg21t5
The Average Age and Income of Home Buyers and Home Sellers in the 50 Biggest Metro Areas
/r/Infographics
https://redd.it/zg1z64
Turbāre circulōs meōs | 29-05-17 | by Xponentialdesign
/r/mathpics
https://redd.it/zcw6sc
Countries with completely free ($0 tuition fees) university education
/r/MapPorn
https://redd.it/zh0hvz
[OC] What forms of development do Americans support nationally and locally?
https://redd.it/zfyqom
@datascientology
[OC] Circular calendar showing how sunrise and sunset times change throughout the year
https://redd.it/zg83a5
@datascientology
P Personal project for PhDs and scientists
Hello!
I've developed NaimAI, a project to help PhDs and scientists with their scientific literature reviews. Briefly, it has 3 main features: 1 searching within papers, 2 structuring abstracts into objectives, methods, and results, and 3 automatically generating a (pseudo) literature review.
I wrote a Medium article (yaassinekaddi/literature-review-with-naimai-open-sourced-fcbdb36762de) that goes through the details.
GitHub repo: https://github.com/yassinekdi/naimai
I've also created a subreddit: r/naimai4science
I'd be happy to hear your opinions, and hopefully this could be useful!
/r/MachineLearning
https://redd.it/zg3bsd
Length of bars means nothing. lol, nice, reddit.
/r/dataisugly
https://redd.it/zg6da4
D We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything!
EDIT 11:58am PT: Thanks for all the great questions! We stayed almost an hour longer than originally planned to get through as many as possible, but we're signing off now. We had a great time, and thanks again for all the thoughtful questions!
PROOF: https://i.redd.it/8skvttie6j4a1.png
We’re part of the research team behind CICERO, Meta AI’s latest research in cooperative AI. CICERO is the first AI agent to achieve human-level performance in the game Diplomacy. Diplomacy is a complex strategy game involving both cooperation and competition that emphasizes natural language negotiation between seven players. Over the course of 40 two-hour games with 82 human players, CICERO achieved more than double the average score of other players, ranked in the top 10% of players who played more than one game, and placed 2nd out of 19 participants who played at least 5 games. Here are some highlights from our recent announcement:
NLP x RL/Planning: CICERO combines techniques in NLP and RL/planning, by coupling a controllable dialogue module with a strategic reasoning engine.
Controlling dialogue via plans: In addition to being grounded in the game state and dialogue history, CICERO’s dialogue model was trained to be controllable via a set of intents or plans in the game. This allows CICERO to use language intentionally and to move beyond imitation learning by conditioning on plans selected by the strategic reasoning engine.
Selecting plans: CICERO uses a strategic reasoning module to make plans (and select intents) in the game. This module runs a planning algorithm which takes into account the game state, the dialogue, and the strength/likelihood of various actions. Plans are recomputed every time CICERO sends/receives a message.
Filtering messages: We built an ensemble of classifiers to detect low quality messages, like messages contradicting the game state/dialogue history or messages which have low strategic value. We used this ensemble to aggressively filter CICERO’s messages.
Human-like play: Over the course of 72 hours of play – which involved sending 5,277 messages – CICERO was not detected as an AI agent.
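The "filtering messages" step above is a simple veto ensemble: several classifiers each inspect a candidate message, and any positive flag drops it. A minimal sketch of that idea (the classifier functions here are invented stand-ins, not Meta's models):

```python
# Stand-in classifiers for the two failure modes named above:
# contradicting the game state/dialogue, and low strategic value.
def contradicts_state(msg):
    return "I will attack you" in msg and "ally" in msg

def low_strategic_value(msg):
    return len(msg.split()) < 2  # e.g. content-free one-word messages

CLASSIFIERS = [contradicts_state, low_strategic_value]

def keep_message(msg):
    """Aggressive filtering: keep a candidate only if no classifier flags it."""
    return not any(clf(msg) for clf in CLASSIFIERS)

candidates = ["Let's demilitarise the border this turn.", "Hi"]
print([m for m in candidates if keep_message(m)])
```

Vetoing on any single flag trades recall for precision, which matches the "aggressively filter" framing: a dropped good message costs little, while one contradictory message can burn an alliance.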
You can check out some of our materials and open-sourced artifacts here:
Research paper
[Project overview](https://ai.facebook.com/research/cicero/)
Diplomacy gameplay page
[Github repo](https://github.com/facebookresearch/diplomacy_cicero)
Our latest blog post
Joining us today for the AMA are:
Andrew Goff (AG), 3x Diplomacy World Champion
Alexander Miller (AM), Research Engineering Manager
Noam Brown (NB), Research Scientist [(u/NoamBrown)](https://www.reddit.com/user/NoamBrown/)
Mike Lewis (ML), Research Scientist (u/mikelewis0)
David Wu (DW), Research Engineer [(u/icosaplex)](https://www.reddit.com/user/icosaplex/)
Emily Dinan (ED), Research Engineer
Anton Bakhtin (AB), Research Engineer
Adam Lerer (AL), Research Engineer
Jonathan Gray (JG), Research Engineer
Colin Flaherty (CF), Research Engineer (u/c-flaherty)
We’ll be here on December 8, 2022 @ 10:00AM PT - 11:00AM PT.
/r/MachineLearning
https://redd.it/zfeh67
[OC] How media divides us: MSNBC vs Fox News - What stories or topics are they pushing over the last week (Dec 1st to Dec 8th)? How do they compare to Reuters?
https://redd.it/zg1ezn
@datascientology
Judea Pearl, a pioneering figure in artificial intelligence, long argued that AI has been stuck in a decades-long rut because of our struggles digitising causal reasoning. That's why the outcome of this basic test is sending chills down my spine.
/r/datascience
https://redd.it/zfrynz