

Scam AI 'kidnapping', the $20,000 robot chef, Ackman's AI plagiarism war: AI Eye

Fake kidnappings with AI

This week, in a bizarre cyber kidnapping case, a missing 17-year-old Chinese exchange student was found alive in a tent in the frozen Utah wilderness.

He had been manipulated into the wilderness by scammers who claimed to have kidnapped him and who extorted an $80,000 ransom from his parents.

Riverdale police are grappling with the cyber kidnapping phenomenon.

While it's unclear whether AI was used in this particular incident, it highlights a growing trend of fake kidnapping schemes, which often target Chinese exchange students.

Riverdale police say the scammers typically convince victims to isolate themselves by threatening to harm their families, then use fear tactics and faked photos and audio (sometimes staged, sometimes AI-generated from the "kidnapped" victim) to extort money.

Arizona woman Jennifer DeStefano testified to the U.S. Senate last year that she was fooled by deepfake AI technology into believing her 15-year-old daughter Briana had been kidnapped.

The scammers had apparently learned the teenager was away on a ski trip, then called Jennifer using a deepfake AI voice that mimicked Briana sobbing and crying: "Mom, these bad men have me, help me, help me."

A man then threatened to pump Briana full of drugs and kill her unless a ransom was paid.

Fortunately, before she handed over any cash, another parent mentioned having heard of similar AI scams, and Jennifer was able to reach the real Briana and confirm she was safe.

Police were uninterested in her report, dismissing it as a prank call.

Cambridge University psychology professor Sander van der Linden advises people to avoid posting travel plans online and to say as little as possible to spam callers, to stop them from capturing your voice.

If you have a lot of audio or video footage of yourself online, you might want to consider removing it.

A "ChatGPT moment" for robotics?

Figure founder Brett Adcock breathlessly tweeted in all lowercase over the weekend that his lab had just made an AI breakthrough and that robotics is approaching its ChatGPT moment. That may be overstating things a little.

The breakthrough was showcased in a one-minute video of the company's Figure-01 robot making coffee by itself after 10 hours of human instruction.

Making coffee isn't exactly groundbreaking (and certainly not everyone was impressed), but the video claims the robot is able to learn from its mistakes and correct itself.

So when Figure-01 placed a coffee pod incorrectly, it was smart enough to nudge it into the slot.

So far, AI has been pretty bad at self-correcting its own mistakes.

Adcock said: "The reason why this is so groundbreaking is: if you can get human data for an application (making coffee, folding laundry, warehouse work, etc.), you can then train an AI system end-to-end. On Figure-01, there is a path to scale to every use case, and when the fleet expands, further data is collected from the robot fleet, re-trained, and the robot achieves even better performance."
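Under the hood, this kind of end-to-end training on human demonstration data is usually some flavor of imitation learning. Below is a minimal behavioral-cloning sketch in PyTorch; it is purely illustrative, since Figure has not published its pipeline, and the observation/action dimensions, network and synthetic data are assumptions.

```python
# Minimal behavioral-cloning sketch: learn to map robot observations to
# actions from logged human demonstrations. Illustrative only -- the
# dimensions, network and random "demo" data are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 512, 14  # assumed: encoded camera features -> joint targets

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)

# Stand-in for hours of logged (observation, action) pairs from teleoperation.
demos = TensorDataset(torch.randn(10_000, OBS_DIM), torch.randn(10_000, ACT_DIM))
loader = DataLoader(demos, batch_size=256, shuffle=True)

opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for epoch in range(10):
    for obs, act in loader:
        loss = nn.functional.mse_loss(policy(obs), act)  # imitate the demonstrator
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The appeal of the approach Adcock describes is that the same recipe transfers to any task you can collect demonstrations for, and new data from a deployed fleet simply feeds back into retraining.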

Another robot hospitality video came out this week, showcasing Google DeepMind and Stanford University's Mobile Aloha, a robot that can cook you dinner and then clean up afterward. Researchers claim it only took 50 demonstrations for the robot to learn some new tasks, showing footage of it cooking shrimp and chicken dishes, taking an elevator, opening and closing a cabinet, wiping up some wine and pushing in some chairs. Both the hardware and the machine learning algorithm are open source, and the system costs $20,000 from Trossen Robotics.

Introducing — Hardware! A low-cost, open-source, mobile manipulator. One of the most high-effort projects in my past 5yrs! Not possible without co-lead @zipengfu and @chelseabfinn. At the end, what's better than cooking yourself a meal with the pic.twitter.com/iNBIY1tkcB

— Tony Z. Zhao (@tonyzzhao) January 3, 2024

Full-scale AI plagiarism war

One of the weirder discoveries of the past few months is that Ivy League colleges in the U.S. care more about plagiarism than they do about genocide. This, in a roundabout way, is why billionaire Bill Ackman is now proposing using AI to conduct a plagiarism witch hunt across every university in the world.

Ackman was unsuccessful in his campaign to get Harvard President Claudine Gay fired over failing to condemn hypothetical calls for genocide, but the subsequent campaign to get her fired over plagiarism worked a treat. However, it blew back on his wife Neri Oxman, a former Massachusetts Institute of Technology professor, when Business Insider published claims her 300-page 2010 dissertation had some plagiarised paragraphs.


Ackman now wants to take everyone else down with Neri, starting with a plagiarism review of every academic, admin staff member and board member at MIT. "Every faculty member knows that once their work is targeted by AI, they will be outed. No body of written work in academia can survive the power of AI searching for missing quotation marks, failures to paraphrase appropriately, and/or the failure to properly credit the work of others."

Last night, no one at @MIT had a good night's sleep. Yesterday evening, shortly after I posted that we were launching a plagiarism review of all current MIT faculty, President Kornbluth, members of MIT's administration, and its board, I am sure that an audible collective gasp

— Bill Ackman (@BillAckman) January 7, 2024

Ackman then threatened to do the same at Harvard, Yale, Princeton, Stanford, Penn and Dartmouth, and surmised that sooner or later, every higher education institution in the world will need to conduct a preemptive AI review of its faculty to get ahead of any possible scandals.

Showing why Ackman is a billionaire and you're not, halfway through his 5,000-word screed, he realized there's money to be made by starting a company offering credible, third-party AI plagiarism reviews and added he'd be interested in investing in one.

Enter convicted financial criminal Martin Shkreli. Better known as Pharma Bro for buying the license to Daraprim and then hiking the price by 5,455%, Shkreli now runs a medical LLM service called Dr Gupta. He replied to Ackman, saying: “Yeah I could do this easily,” noting his AI has already been trained on the 36 million papers contained in the PubMed database.

Probably better to do vector search tbh but some rag to go through it case by case. We actually already have all of pubmed downloaded so it's more or less a for loop once you feel good about your main tool. Can calibrate it on Gay's work

— Martin Shkreli (e/acc) (@wagieeacc) January 7, 2024

While online plagiarism detectors like Turnitin already exist, there are doubts about their accuracy, and it would still be a mammoth undertaking to enter every article from every academic at even a single institution and cross-check the citations. However, AI agents could potentially conduct such a review systematically and affordably.

Even if the global plagiarism witch hunt doesn't happen, it seems increasingly likely that in the next couple of years, any academic who has ever plagiarized something will get found out in the course of a job interview process, or whenever they tweet a political position someone else doesn't like.
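As a rough illustration of the vector-search approach Shkreli describes, such a review could embed passages from a manuscript and compare them against a corpus of prior work, flagging near-duplicates for a human to check. The sketch below uses the sentence-transformers library; the model choice, similarity threshold and toy corpus are assumptions, not anyone's actual tool.

```python
# Hedged sketch of embedding-based plagiarism screening: flag passages in a
# manuscript that are suspiciously similar to passages in a reference corpus.
# Model, threshold and data here are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Prior-work passage one goes here.",
    "Prior-work passage two goes here.",
]  # in practice: millions of paragraphs from published papers
manuscript = [
    "Candidate passage from the dissertation under review.",
]

corpus_emb = model.encode(corpus, convert_to_tensor=True)
doc_emb = model.encode(manuscript, convert_to_tensor=True)

SUSPICION_THRESHOLD = 0.85  # assumed cutoff; tune on known plagiarism cases
for i, passage in enumerate(manuscript):
    scores = util.cos_sim(doc_emb[i], corpus_emb)[0]
    best = scores.argmax().item()
    score = scores[best].item()
    if score >= SUSPICION_THRESHOLD:
        print(f"Flag for human review: {passage!r} ~ {corpus[best]!r} ({score:.2f})")
```

High-similarity hits are only candidates; quotation, common phrasing and self-citation all produce false positives, which is why any such pipeline would still need human reviewers at the end.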

AI will similarly lower the cost and resource barriers to other fishing expeditions, making it feasible for tax departments to send AI agents to trawl through the blockchain for crypto transactions from 2014 that users failed to report, and for offense archaeologists to use AI to comb through every tweet you've made since 2007 looking for inconsistencies or bad takes. It's a brave new world of AI-powered dirt digging.

Two perspectives on AI regulation

Professor Toby Walsh, the chief scientist at the University of New South Wales' AI Institute, says heavy-handed approaches to AI regulation are not feasible. He says attempts to limit access to AI hardware like GPUs will not work, as LLM compute requirements are falling (see our piece on a local LLM on an iPhone below). He also argues that banning the tech will be about as successful as the United States government's failed efforts to limit access to encryption software in the 1990s.


Instead, he called for vigorous enforcement of existing product liability laws to hold AI companies to account for the actions of their LLMs. Walsh also called for a focus on competition by applying antitrust regulation more forcefully to lever power away from the Big Tech monopolies, and for more government investment in AI research.

Meanwhile, venture capital firm Andreessen Horowitz has gone hard on the "competition is good for AI" theme in a letter sent to the United Kingdom's House of Lords. It says that large AI companies and startups should be allowed to build AI as fast and aggressively as they can, and that open-source AI should also be allowed to freely proliferate to compete with both.

A16z's letter to the House of Lords. (X)

All killer, no filler AI news

OpenAI has published a response to The New York Times' copyright lawsuit. It claims training on NYT articles is covered by fair use, that regurgitation is a rare bug, and that despite the NYT case having no merit, OpenAI wants to come to an agreement anyway.

The New Year began with controversy about why ChatGPT is delighted to provide Jewish jokes and Christian jokes but refuses point blank to make Muslim jokes. Someone eventually got a halal-rious pun out of ChatGPT, which showed why it's best not to ask ChatGPT to make jokes about anything.

In a development potentially worse than fake kidnappings, AI robocall services have been released that can tie you up in fake spam conversations for hours. Someone needs to develop an AI answering machine to screen these calls.

Introducing Bland Turbo. The world's fastest conversational AI: – Send or receive up to 500,000+ phone calls simultaneously. – Responds at human level speed. In anyone's voice. – Program it to do anything. Call and talk to Bland Turbo now: https://t.co/qm0sUggov3 (1/3) pic.twitter.com/5e6Y3FU9Wh

— Bland.ai (@usebland) January 5, 2024

A blogger with a mixed record claims to have insider info that a big Siri AI upgrade will be announced at Apple's 2024 Worldwide Developers Conference. Siri will reportedly use the Ajax LLM, resulting in more natural conversations, and will also link to various external services. But who needs Siri AI when you can now download a $1.99 app from the App Store that runs the open-source Mistral 7B 0.2 LLM locally on your iPhone?
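For readers who would rather tinker on a laptop than buy the app, a quantized Mistral 7B can be run locally with the llama-cpp-python bindings. This is a hedged sketch, not the app's actual runtime; the GGUF filename and thread count below are assumptions you'd adjust for your own download and hardware.

```python
# Hedged sketch: run a quantized Mistral 7B Instruct model locally with
# llama-cpp-python. The GGUF filename is an assumption -- point it at
# whatever quantized weights you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # assumed local file
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune for your machine
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Draft a two-sentence reminder to buy coffee pods."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

A 4-bit quantized 7B model fits comfortably in a few gigabytes of RAM, which is exactly the falling-compute-requirements point Walsh makes above.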

Around 170 of the 5,000 submissions to an Australian senate inquiry on legalizing cannabis were found to be AI-generated.

More than half (56%) of 800 CEOs surveyed believe AI will entirely or partially replace their roles. Most also believe that more than half of entry-level knowledge worker jobs will be replaced by AI and that nearly half the skills in the workforce today won't be relevant in 2025.

