🤖 The #1 AI news source! We cover the latest artificial intelligence breakthroughs and emerging trends. Contact: @CaptainJamesCook
🚨 Senator Laphonza Butler thinks supporting Big AI or human workers is a ‘false choice’
🟦 Representing California in Congress presents a unique challenge, particularly in the realm of technology and artificial intelligence (AI). Current California Senator Laphonza Butler and Vice President Kamala Harris, who previously held the Senate seat, are both navigating the complexities of national politics while addressing the interests of the state's significant tech industry. As AI becomes an increasingly important topic, Congress has struggled to establish a national regulatory framework. In her role, Harris has taken on the responsibility of leading discussions on AI regulation within the Biden administration, while Butler focuses on the impact of AI on labor and social equity.
🟦 Butler emphasizes the need for a balanced approach to AI regulation, aiming to protect Americans from potential risks while fostering innovation. She commends the efforts of Senate Majority Leader Chuck Schumer and the Biden administration for creating inclusive discussions that bring together labor leaders, civil society representatives, and AI industry executives. According to Butler, effective policymaking involves listening to all stakeholders to ensure that the interests of both AI companies and the workforce are considered. This sentiment is echoed by California State Senator Scott Wiener, who faced challenges in passing a state-level bill aimed at whistleblower protections for AI companies, asserting that innovation and safety can coexist.
🟦 While Butler acknowledges the progress made in regulating AI, she insists that more work is necessary. Schumer has outlined a roadmap for AI policy, and the White House has secured voluntary commitments from AI companies to ensure safer development practices. One of Butler’s key initiatives is the Workforce of the Future Act, which aims to study AI's impact on various job sectors and create a $250 million grant program to equip workers with necessary skills. Butler believes that preparing both the current and future workforce is essential for maximizing the benefits of AI deployment.
🟦 Butler views this moment as an opportunity for policymakers to proactively address the inevitable disruptions brought by AI, striving to create equitable opportunities that stabilize the economy. However, she remains realistic about the legislative landscape, acknowledging the impending challenges posed by the upcoming presidential election and the need for further dialogue among diverse stakeholders before comprehensive AI legislation can be advanced. With Congress nearing the end of its session, Butler emphasizes the importance of engaging with different perspectives on this critical issue.
Source | Artificial intelligence 🤖
The UNITREE H1 robot fails while in action 😳
Source | Artificial intelligence 🤖
🚨 Meta unveils AI video generator, taking on OpenAI and Google
🔹 Meta Platforms Inc., the parent company of Facebook, has launched a new artificial intelligence tool called Movie Gen, which can generate or edit videos based on simple text prompts. This development intensifies competition with other tech giants like OpenAI and Google in the race to create advanced AI technologies. Movie Gen can produce videos up to 16 seconds long and can also generate audio or edit existing videos using text prompts or photos of real people. Currently, the tool is available to select internal employees and a few external partners, including filmmakers, with plans to integrate it into Meta's existing apps next year.
🔹 Meta executives, including Vice President Connor Hayes, aim for Movie Gen to enhance user engagement by making video creation and editing more accessible and enjoyable. Although the specific integration plans are still under discussion, Hayes emphasized the tool's potential to encourage creativity among users. This initiative reflects a broader trend among major tech companies to develop AI models that can generate videos, which present more complex challenges compared to those generating text.
🔹 One of the hurdles Meta faces is the current inefficiency of the technology, as generating a video from a text prompt takes "tens of minutes," making it impractical for general consumer use on mobile devices. Additionally, Meta is working on addressing critical safety and ethical concerns, particularly regarding personalized videos. The company aims to prevent users from creating inappropriate or unflattering videos of others without their consent, which executives view as a significant issue to resolve before wider release.
🔹 Meta's commitment to AI advancements is underscored by CEO Mark Zuckerberg's focus on AI as a central driver of user engagement and revenue growth. In the short term, AI has already improved the relevance of content algorithms, enhancing the user experience on the platform. Looking ahead, Zuckerberg envisions AI playing an even more substantial role in powering Meta's applications and future technologies, such as smart glasses, as the company continues to explore the potential of AI in various domains.
Source | Artificial intelligence 🤖
Good at AI? Show your best – participate in the international AI Journey Contest. The prize fund is over $87,000! 🤩
The tasks are grand and ambitious. Participants will work with SOTA technologies, choosing one or more of the proposed tasks:
✔️ Emotional FusionBrain 4.0 — create a multimodal model that understands videos brilliantly, answers complex questions, and recognizes human emotions.
✔️ Multiagent AI — develop a multi-agent RL system where agents form different cooperation schemes to solve tasks; a minimal illustrative sketch follows this list. This challenge is extremely valuable for scientific research.
✔️ Embodied AI — create an assistant robot that will solve complex tasks involving interaction with the environment and humans, communicating in natural language.
✔️ E-com AI Assistant — using the LLM GigaChat, create an AI assistant that can recommend relevant products for purchase on the Megamarket marketplace to users.
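For readers new to the multi-agent setup, here is a minimal, purely illustrative sketch (not contest code; the game, rewards, and hyperparameters are invented): two independent Q-learners that only earn reward when they coordinate on the same action, the simplest form of a learned cooperation scheme.

```python
# Illustrative only: two independent Q-learners in a repeated coordination game.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
q = [np.zeros(n_actions), np.zeros(n_actions)]  # one Q-table per agent
eps, alpha = 0.1, 0.1                            # exploration rate, learning rate

for step in range(5000):
    # Each agent picks an action independently (epsilon-greedy on its own Q-values).
    acts = [rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[i]))
            for i in range(2)]
    # Shared reward only when the agents coordinate on the same action.
    r = 1.0 if acts[0] == acts[1] else 0.0
    # Independent Q-learning: each agent treats the other as part of the environment.
    for i in range(2):
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

print("Agent Q-values:", [qi.round(2) for qi in q])
```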
Be the one to boost AI growth! 🫵🏻
Follow the link, register and get ready to complete the tasks by October 28!
🚨 OpenAI researcher Noam Brown confirms that reaching AGI is their main goal
Source | Artificial intelligence 🤖
🚨 New AI glasses reveal personal info like name, address, just by looking at people
🔴 Harvard students have developed software for smart glasses that can identify individuals by recognizing their faces, subsequently retrieving personal information such as names, addresses, occupations, and online photos. This technology utilizes a camera-equipped pair of glasses that captures images of faces and accesses data from facial recognition sites like PimEyes and FaceCheck, employing neural networks to compile extensive information from across the internet.
🔴 The implications are concerning: the technology could even surface sensitive information such as Social Security numbers, which could facilitate identity theft and unauthorized access to financial accounts. While the students have not released their code, the existence of such deanonymization capabilities in public spaces raises significant privacy and ethical concerns.
Source | Artificial intelligence 🤖
🚨 OpenAI asks investors not to back rival start-ups such as Elon Musk’s
🔹 OpenAI is reportedly urging investors to refrain from supporting rival startups like Anthropic and Elon Musk's xAI, as the company aims to solidify its dominance in the generative AI space. Led by CEO Sam Altman, OpenAI is nearing the completion of a $6.5 billion funding round, which is expected to value the company at $150 billion. By seeking exclusive funding arrangements, OpenAI intends to limit its competitors' access to capital and strategic partnerships, a strategy that may exacerbate tensions with rivals, particularly Musk, who is currently suing OpenAI.
🔹 The exclusivity OpenAI desires is notable given the typical practices in venture capital, where many firms, including Sequoia Capital and Andreessen Horowitz, commonly invest in multiple competitors within a sector. A partner at a leading VC firm observed that while it has long been an unwritten rule not to back rival companies, enforcing exclusivity is unusual. This approach may reflect OpenAI's efforts to maintain its competitive edge, reminiscent of Uber's past tactics when it sought to dominate the ride-sharing market.
🔹 Thrive Capital, founded by Josh Kushner, is leading the current funding round and has pledged $1 billion. Other investors, including Khosla Ventures, SoftBank, and Calpers, are also expected to participate, potentially raising significant capital through special purpose vehicles (SPVs). Meanwhile, strategic investors like Microsoft, Nvidia, and Apple have shown interest in contributing to this funding round, further indicating the competitive landscape within the AI industry.
🔹 In addition to the funding efforts, OpenAI is undergoing a corporate restructuring to move away from its non-profit origins, which would allow investors to benefit more from any future profits. Altman has discussed taking an equity stake in this new for-profit structure but has dismissed reports suggesting he would receive a 7% stake worth over $10 billion as "ludicrous." Musk has been critical of OpenAI's shift towards commercial partnerships, alleging that the company has strayed from its original mission to benefit humanity, and is pursuing legal action to void its agreement with Microsoft, which is now under scrutiny from antitrust regulators.
Source | Artificial intelligence 🤖
❌ Read Microsoft’s optimistic memo about the future of AI companions
❗️ Microsoft has unveiled a redesigned version of its AI assistant, Copilot, as part of a broader initiative to establish a more personalized AI companion. Mustafa Suleyman, the newly appointed CEO of Microsoft’s AI division, highlighted this shift in a detailed memo, emphasizing a "technological paradigm shift" in which AI models can increasingly understand human inputs. Suleyman, who joined Microsoft after co-founding and leading Inflection AI, previously stirred controversy by suggesting that content published on the open web is effectively "freeware" for AI training, but now expresses optimism about AI creating a supportive technological environment.
❗️ In his memo, Suleyman outlines the vision for Copilot, describing it as a dynamic AI companion that adapts to users' preferences and needs over time. He stresses that technology should enhance human well-being and foster deeper connections rather than merely focusing on technical specifications. The updated Copilot is designed to be a helpful presence in users' lives, assisting with daily tasks, providing valuable insights, and offering emotional support when needed.
❗️ Suleyman further elaborates on the practical applications of the AI assistant, suggesting that Copilot will be capable of acting on users' behalf in significant moments, such as taking notes during medical appointments or helping with personal planning. He asserts that the goal of this technology is not to diminish human uniqueness but to enrich lives and strengthen social bonds. By safeguarding privacy and security, Copilot aims to understand users' contexts while simplifying the complexities of daily life.
❗️ The initiative represents an early step in what Suleyman describes as a long journey toward a new era of technology that prioritizes user experience and societal impact. He emphasizes the importance of accountability, patience, and collaboration with users throughout this process. Suleyman’s commitment to compassion and respect for users is framed as central to the mission, marking the beginning of significant changes in the AI landscape designed to support and enhance human experiences.
Source | Artificial intelligence 🤖
Sam Altman may get 7% stake as OpenAI eyes for-profit status. $150B valuation on the horizon. Non-profit board to step back.
Source | Artificial intelligence 🤖
🚨 Meta CEO Mark Zuckerberg predicts smart glasses will replace phones by 2030
" Smart glasses are going to become the next major computing platform.
They will gradually replace phones by 2030, much like mobile devices surpassed computers without fully replacing them "
Source | Artificial intelligence 🤖
This is how Germans build houses
Source | Artificial intelligence 🤖
🚨 OpenAI CEO Sam Altman explains why he chose to work on AGI
" Working on AGI seems like the only choice for me and it began as a weak idea.
So my advice is to avoid negative people who prematurely dismiss ideas, as this can stop innovation "
Source | Artificial intelligence 🤖
🚨 A deepfake caller pretending to be a Ukrainian official almost tricked a US senator
🔹 Sen. Ben Cardin, the head of the Senate Foreign Relations Committee, recently encountered a sophisticated deepfake during a Zoom call with someone impersonating Dmytro Kuleba, Ukraine's former foreign minister. Cardin received an email that appeared to be from Kuleba, inviting him to discuss various issues. During the call, the individual convincingly mimicked Kuleba's appearance and voice but behaved oddly, prompting Cardin to become suspicious.
🔹 The impersonator engaged Cardin in politically charged discussions, including his stance on foreign policy matters such as the use of long-range missiles against Russia. Concerned about the nature of the conversation, Cardin reported the incident to the State Department, which later confirmed that he had spoken with an imposter rather than the real Kuleba. The identity of the impersonator remains unknown, highlighting the threat posed by deepfake technology.
🔹 In a statement to The New York Times, Cardin described the incident as a "deceptive attempt" by a "malign actor" to engage him in conversation. While his statement did not name the official who had been impersonated, Senate security officials warned lawmakers to be vigilant for similar attempts in the future. Their communication emphasized the rising sophistication and believability of such social engineering threats.
🔹 As AI tools become more accessible, the use of deepfakes in politically motivated schemes has surged. The Senate security office noted the increasing frequency of such threats, exemplified by previous incidents involving deepfake impersonations of public figures. For instance, the FCC has pursued fines against political consultants for robocalls impersonating President Biden, while deepfake content has also been used to misrepresent Vice President Harris and former President Trump. These instances underscore the growing challenge of verifying the authenticity of communications in the digital age.
Source | Artificial intelligence 🤖
The German government-funded DLR Institute has conducted some incredible research in humanoid robot hardware and control systems since 2008.
In 2010, they designed a tendon-driven five-finger hand, and in 2016, demoed their robot drilling a hole in concrete using a 20-DoF hand.
Source | Artificial intelligence 🤖
Former Google CEO Eric Schmidt says energy demand for AI is infinite and we are never going to meet our climate goals anyway, so we may as well bet on building AI to solve the problem
Source | Artificial intelligence 🤖
Atom Limbs CEO Tyler Hayes says AI and data are where the prosthetic industry is going, and have the greatest runway for potential improvements, even beyond the robotics and neural interface aspects of prosthetics.
Source | Artificial intelligence 🤖
JAKA Robotics unveiled their full humanoid robot K-1 during the China International Industry Fair in Shanghai last week.
They brought a non-functioning prototype on stage and showed some simulation actions on screen.
Source | Artificial intelligence 🤖
Jensen Huang, Nvidia CEO, discusses the company’s partnership with Accenture to help scale AI adoption.
Source | Artificial intelligence 🤖
Best @aiart here, be rewarded💎 for your art!
🚨 OpenAI Has Closed New Funding Round Raising Over $6.5 Billion
🔹 OpenAI has completed a deal to raise over $6.5 billion in new funding, giving the artificial intelligence company a more than $150 billion valuation, and bolstering its efforts to build the world's leading generative AI technology.
🔹 The deal is one of the largest-ever private investments, and makes OpenAI one of the three largest venture-backed startups, alongside Elon Musk's SpaceX and TikTok owner ByteDance Ltd., according to people familiar with the matter who asked not to be identified discussing private information. The size of the investment underscores the tech industry's belief in the power of AI, and its appetite for the extremely costly research powering its advancement.
🔹 The funding round was led by Thrive Capital, the venture capital firm headed up by Josh Kushner, Bloomberg previously reported, along with other global investors.
🔹 The massive funding round follows a turbulent year for OpenAI. Last November, the company's board fired and then quickly rehired Chief Executive Officer Sam Altman. In the months since, the company has remade its board, hired hundreds of new employees, and lost several key leaders, including co-founder and Chief Scientist Ilya Sutskever and Chief Technology Officer Mira Murati.
Source | Artificial intelligence 🤖
A dishwasher is not needed if a robot can clean plates and rack them to dry using less water and energy.
Two engineers achieved this with two gripper arms and just two hours of training data.
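The engineers have not published their training code, but systems like this are typically trained by behavioral cloning from teleoperated demonstrations. A minimal sketch of that recipe, with invented observation and action dimensions and random stand-in data in place of the real logs:

```python
# Minimal behavioral-cloning sketch (illustrative; dimensions and data are invented).
import torch
import torch.nn as nn

OBS_DIM = 32      # e.g. image features + joint states (assumed)
ACT_DIM = 14      # e.g. 7 joint targets per arm x 2 arms (assumed)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for a couple of hours of logged (observation, demonstrated action) pairs.
obs = torch.randn(10_000, OBS_DIM)
demo = torch.randn(10_000, ACT_DIM)

for epoch in range(10):
    loss = nn.functional.mse_loss(policy(obs), demo)  # imitate the demonstrations
    opt.zero_grad()
    loss.backward()
    opt.step()
# At run time, the trained policy maps each new observation to arm commands.
```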
Source | Artificial intelligence 🤖
🚨 Google paid $2.7 billion to rehire Noam Shazeer
🔹Google has agreed to a significant deal worth $2.7 billion to rehire Noam Shazeer, an AI expert who previously left the company. This arrangement also includes licensing technology from Character.AI, a startup co-founded by Shazeer after his departure. Shazeer left Google in 2021 due to disagreements over the release of a chatbot he developed, which the company decided not to release over concerns about potential risks.
🔹Noam Shazeer was a key figure at Google, contributing to major AI advancements, including co-authoring the influential paper "Attention Is All You Need," which introduced the transformer architecture critical for modern natural language processing. His departure marked a significant loss for Google, as he had been instrumental in shaping the company's AI capabilities.
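For context, the core operation that paper introduced is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, which transformers apply repeatedly to mix information across tokens. A minimal NumPy sketch:

```python
# Scaled dot-product attention from "Attention Is All You Need" (minimal sketch).
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                   # softmax over keys
    return w @ V                                         # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # 4 tokens, 8-dim embeddings (self-attention: Q = K = V)
print(attention(x, x, x).shape)    # (4, 8)
```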
🔹After leaving Google, Shazeer and Daniel De Freitas, another former Google employee, co-founded Character.AI, which specializes in developing chatbots that can imitate various characters or personalities in conversation. The licensing deal with Google allows the tech giant to gain immediate access to Character.AI's technology, bypassing the lengthy regulatory approval process that would be required for a full acquisition.
🔹Upon his return to Google, Shazeer will lead the development of the next-generation AI model, Gemini, positioning it as a competitor to models like OpenAI's ChatGPT. This high-profile deal underscores the competitive nature of the AI industry, where expertise and innovative technology command substantial investments, highlighting the value of key individuals like Shazeer in driving advancements in AI.
Source | Artificial intelligence 🤖
Raspberry Pi and Sony made an AI-powered camera module
Source | Artificial intelligence 🤖
🚨 California's governor has vetoed SB 1047
🔹 SB 1047 was a proposed law focused on AI safety that had sparked extensive debate over the past several months. The legislation aimed to mandate that developers of large AI models exercise "reasonable care" to prevent their technology from posing an "unreasonable risk" of significant harm, particularly in terms of cyberattacks causing at least $500 million in damages or resulting in mass casualties. Additionally, it required that AI systems be designed to allow for human intervention in the event of dangerous behavior.
🔹 The vetoed law faced strong opposition from major tech companies, including Google, Meta, Microsoft, and OpenAI, along with smaller enterprises. Critics argued that the law's vague language, particularly regarding what constituted an "unreasonable risk," created uncertainty that could hinder development and lead to legal challenges. This ambiguity raised concerns that companies might be unable to release their AI models without risking litigation or regulatory issues.
🔹 Furthermore, the law would not only impact California-based AI companies but also extend to any firms that operate within the state, effectively broadening its reach. This potential regulatory burden contributed to the apprehension among developers, who feared that the law could stifle innovation and operational flexibility in the rapidly evolving AI landscape.
Source | Artificial intelligence 🤖
🚨 ‘Robot lawyer’ company faces $193,000 fine as part of FTC’s AI crackdown
🔹 DoNotPay, a company that claimed to offer the "world's first robot lawyer," has reached a $193,000 settlement with the Federal Trade Commission (FTC) as part of Operation AI Comply, an initiative aimed at addressing companies that use AI services to deceive consumers. The FTC's complaint highlighted that DoNotPay made bold assertions about its AI's ability to replace human lawyers and generate legal documents, but did so without any supporting testing. The agency found that the company's technology had not been trained on relevant laws or tested for quality and accuracy in its legal offerings.
🔹 The FTC's investigation revealed that DoNotPay had misled consumers by suggesting they could use its AI to pursue legal actions, such as suing for assault, without the need for human lawyers. Additionally, the company claimed it could check for legal violations on small business websites merely based on a consumer's email address, promising significant savings on legal fees. However, the FTC determined that these claims were unfounded and that the service was ineffective.
🔹 As part of the settlement, DoNotPay agreed to pay the fine and to inform consumers who subscribed to its services between 2021 and 2023 about the limitations of its legal offerings. Furthermore, the company is prohibited from making claims about replacing professional services without evidence to substantiate such assertions. This case is part of a broader effort by the FTC to regulate AI-related practices that mislead customers.
🔹 In addition to the action against DoNotPay, the FTC is targeting other companies like Rytr, which allegedly provided tools for creating AI-generated fake reviews. The FTC's new regulation banning the creation or sale of fake reviews, including those generated by AI, will soon take effect, allowing the agency to impose significant fines for violations. The FTC's chair, Lina M. Khan, emphasized that using AI to mislead or defraud consumers is illegal and that the agency's enforcement actions aim to protect consumers while ensuring fair competition for honest businesses.
Source | Artificial intelligence 🤖
🚨 Update on California’s SB 1047 bill
🔹 California's SB 1047, a bill aimed at regulating AI, is currently awaiting the governor's decision on whether to sign or veto it. This situation raises broader concerns about how regulatory measures in the U.S. might affect its competitive stance against countries like China. Critics argue that if the U.S. slows its AI development for security reasons, it could fall behind China, which may not impose similar restrictions. There's skepticism about whether regulations in the U.S. could effectively align with China's governance under President Xi Jinping.
🔹 The Economist has provided insights into China's regulatory landscape regarding AI, highlighting that in 2023, China implemented regulations for chatbots and large language models (LLMs). These regulations include assessing algorithms for compliance with socialist norms, ensuring that their responses do not undermine the party line. Additionally, a registry for LLMs was established, requiring developers to register their technologies, reflecting a strong governmental control over AI development.
🔹 Notably, Andrew Chi-Chih Yao, the only Chinese scientist to win a Turing Award, has raised alarms about the existential threats posed by AI, stating they could surpass those of nuclear or biological weapons. His concerns resonate with other influential figures, including the former president of Baidu and members of the Chinese government's AI governance committee. In response to these warnings, Xi Jinping has publicly acknowledged the need for monitoring AI safety at the state level and has endorsed funding for research focused on AI alignment techniques.
🔹 China's strategy emphasizes proactive regulation over reactive measures, as outlined in a manual prepared for party officials that Xi reportedly edited. This document advocates for controlled AI growth to ensure security, reflecting a desire to prevent any AI systems from acting independently of the Communist Party's interests. The overarching sentiment is that the Chinese government is keenly aware of the implications of AI on national and global security, seeking to maintain control over its development while addressing the concerns of AI's potential threats.
Source | Artificial intelligence 🤖
🚨 Gemini is making Gmail’s smart replies smarter
🔹 Google is rolling out a Gemini-powered update to Gmail for Android and iOS that will tailor smart replies more specifically to each email. First announced back in May, the new contextual Smart Replies will, in Google's words, "offer more detailed responses to fully capture the intent of your message" by taking the entire content of the email thread into consideration.
🔹 Users can hover over each of the suggested contextual smart replies to preview the text, and select the option that best matches their needs or writing style. Suggested replies can be edited or sent immediately. The idea is that this will both save time (especially if you're often buried in your Gmail inbox) and improve the variety of automated responses available beyond a simple "Yes, I'm working on it" or "No worries, thanks for the heads up!" — even adding an initial greeting and a signoff message.
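Under the hood, the key change is prompting the model with the whole thread rather than just the latest message. A rough sketch of the idea (this is not Google's or Gemini's actual code; generate() is a placeholder for whatever LLM you can call):

```python
# Illustrative sketch of thread-aware reply suggestions; generate() is a placeholder.
from typing import Dict, List

def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

def suggest_replies(thread: List[Dict[str, str]], n: int = 3) -> List[str]:
    # Give the model the entire thread, not just the last message.
    context = "\n".join(f"{m['sender']}: {m['body']}" for m in thread)
    prompt = (
        "Here is an email thread:\n"
        f"{context}\n\n"
        f"Suggest {n} short replies that capture the intent of the latest message, "
        "one per line, each with a brief greeting and signoff."
    )
    return generate(prompt).splitlines()[:n]
```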
🔹 The new contextual Smart Replies are now rolling out for Gemini Business, Enterprise, Education, Education Premium, and Google One AI Premium subscribers. The feature is currently only available in English and builds on the original Smart Replies added to Gmail in 2017.
Source | Artificial intelligence 🤖