Google Brain founder Andrew Ng thinks you should still learn to code - here's why



ZDNET's key takeaways 

  • AI Dev, DeepLearning.ai's AI conference, made its NYC debut. 
  • We sat down with Andrew Ng at the event to talk AI and developers.
  • Ng recommends that everyone learn to code. 

The second annual AI Dev, a summit on all things AI and software hosted by Andrew Ng's DeepLearning.ai, came to New York on Friday. In several panels and an interview with ZDNET, the Google Brain founder had advice to give about the future of the field. 

AI has rapidly become a reliable coding assistant for many developers -- so much so that many are wondering about the future of the entire profession. Entry-level coding jobs are dwindling for recent grads as teams offload junior tasks to AI assistants; at the same time, experts cite the real limitations of these tools as proof that engineers will never actually become obsolete. 

Also: Why AI coding tools like Cursor and Replit are doomed - and what comes next

Here's what Ng had to say about how to navigate this uncertain future, why everyone should learn to code, and how governance should really be done. 

Coding still matters - kind of 

"Because AI coding has lowered the barroom to introduction truthful much, I anticipation we tin promote everyone to larn to codification -- not conscionable bundle engineers," Ng said during his keynote.

How AI will impact jobs and the future of work is still unfolding. Regardless, Ng told ZDNET in an interview that he thinks everyone should know the basics of how to use AI to code, equivalent to knowing "a little bit of math" -- still a hard skill, but one applied more generally across many careers, wherever you may need it. 

"One of the astir important skills of the aboriginal is the quality to archer a machine precisely what you privation it to bash for you," helium said, noting that everyone should cognize capable to talk a computer's language, without needing to constitute codification yourself. "Syntax, the arcane incantations we use, that's little important." 

Also: OpenAI tested GPT-5, Claude, and Gemini on real-world tasks - the results were surprising

He added that he wants to welcome vibecoders in as community members, even if they aren't technically developers themselves. But he doesn't expect it to be easy, either. Despite noting that "it's really obvious that code should be written with AI assistants," Ng admitted that vibecoding -- which he prefers to call "AI coding" -- leaves him "mentally exhausted." 

Becoming generalists   

In his keynote, Ng noted that because AI has made software development so fast, product management -- not prototyping -- is the new bottleneck for launching new products. To keep the pace AI makes possible, he recommended that engineers learn some product management skills to circumvent that stall. 

"Engineers that larn immoderate merchandise enactment tin frankly beryllium a squad of one," helium said. 

Also: What Bill Gates really said about AI replacing coding jobs

That theme of all professionals -- not just developers -- becoming generalists recurred throughout the summit. During a panel on development in the AI age, Fabian Hedin, CTO at coding platform Lovable -- one of the underdog startups on a16z's recent list -- noted that vibecoding can enable people with deep knowledge in a non-software subject to "iterate a lot faster than before" using coding skills. Moderator Laurence Moroney, director of AI at Arm, said that this can make the most of an otherwise siloed expert, a shift in how niche skills function in the workplace. 

The new challenge for developers, Ng said during the panel, will be coming up with the concept of what they want. Hedin agreed, adding that if AI is doing the coding in the future, developers should focus on their intuition when building a product or tool. 

"The happening that AI volition beryllium worst astatine is knowing humans," helium said. 

Why CS degrees aren't serving students 

The realities of coding in the AI era have started to hit post-grads struggling to find jobs. Computer science, once thought of as a foolproof major that guaranteed a lucrative career, is letting students down, Ng told ZDNET. 

He cited the overhiring sprees tech companies went on -- and then ultimately reversed -- during the COVID-19 pandemic as the primary reason entry-level coding jobs are hard to come by. Beyond that, though, it's a question of grads having the right kind of coding skills. 

"AI has transformed however codification should beryllium written, but candidly, galore universities person been dilatory to accommodate the curriculum," helium said. "So if a assemblage has not importantly changed its curricula since 2022, past they are not readying grads for the jobs of the marketplace today." 

Also: AI will cause 'jobs chaos' within the next few years, says Gartner - what that means

Ng said he considers it "malpractice" for universities to grant CS degrees without teaching those students how to optimize working with AI assistants. 

"I really consciousness atrocious that there's inactive radical contiguous receiving a bachelor's grade successful machine subject that person not made a azygous API telephone to a azygous AI model," helium said. For him, reorienting CS degrees astir that world volition span the spread betwixt underprepared grads and a request for AI-experienced coders.  "For the caller assemblage grads that bash cognize those skills, we can't find capable of them," Ng said, a interest helium besides noted earlier this autumn successful an X post

Public fear of AI 

In his keynote, Ng recognized that "AI has not yet won America's hearts and minds," referring to the often-circulated public perception of what AI could become in its worst-case scenario. Several panelists called on the hundreds of developers in the audience to shift that perception. 


"You person this unsocial penetration into what AI is not," said Miriam Vogel, president and CEO of Equal AI. She urged developers not to disregard people's fears astir the tech, but to actively enactment successful AI literacy, adding that "we volition fail" if that sentiment doesn't improve. 

Ng thinks third parties have intentionally sown AI fear thus far. 

"I deliberation a batch of fearfulness of AI was driven by a fistful of businesses that ran almost, frankly, PR campaigns to get radical to beryllium fearful of AI, often for lobbying," helium told ZDNET during our interview. "I deliberation that's done a batch of harm to the tract of AI and to American enactment for developers." 

When asked how developers can impact that, he said he wants them to engage in candid conversation about what's working and what isn't. "If the public understands it better, then we can all arrive at more rational conclusions about technology," he said. 

Many of those fears stem from AGI, the somewhat ill-defined equivalent of human-level intelligence that OpenAI and Microsoft, among other labs, have set their sights on with increasing intensity. Ng has long maintained that those projections are overblown. 

"If you look astatine the incredibly messy grooming recipes that spell into grooming these AI models, there's nary mode this is AGI -- if, by AGI, you mean immoderate intelligence task that a quality does," Ng told ZDNET. "So overmuch of that cognition is still, frankly, engineered into these systems, with precise clever people, with a batch of data." 

Safety and governance 

In a panel conversation, Ng acknowledged that the public doesn't really know what AI labs are doing, which can create panic, but urged people not to "take a red teaming exercise and turn it into a media sensation." Ng added that he's less in favor of Anthropic's brand of safety and governance, which he finds somewhat limiting. Rather than bearing down on governance efforts, he emphasized sandboxed environments "that are guaranteed safe" as a path toward responsible AI that doesn't hamper speed. 

Vogel defined governance as "breaking down principles into actionable workflows," not creating bureaucracy. Her concern was less about the hyperscalers like OpenAI and Meta, and more about the smaller AI companies that blaze ahead before they've developed any governance structure. 

Regulating AI  

"You don't get to pb successful AI by passing regulations," Ng said during a panel, speaking connected the EU's attack to legislating AI. He credited the Trump administration's AI Action Plan, released this past summer, for keeping national regulations loose. 

Many AI experts are alarmed at the lack of US AI regulation. Some see the federal government's failure to regulate social media platforms when they proliferated as an example of what could happen if AI continues to outpace legislation. Ng told ZDNET he thinks that's a false equivalency. 

Also: 8 ways to make responsible AI part of your company's DNA

"I'm seeing mode much atrocious regulatory proposals than bully ones," Ng said successful the interview, adding that helium sees the nonconsensual deepfake prohibition and the FTC's actions against companies using AI to standard "deceptive oregon unfair conduct" arsenic examples of bully AI policy. 

When asked if there are any other regulations he'd make at the federal level, he said he wants more transparency requirements for large AI companies off the bat. 

"When a batch of atrocious things happened with societal media, nary of america knew astir it. Even radical wrong the concern did not truly cognize astir it," Ng told ZDNET. "If we person regulations to necessitate that the largest companies -- lone the ample ones, truthful we don't enactment unreasonable compliance burdens of tiny startups -- but if we request immoderate level of transparency from the businesses with a precise ample fig of users, that could springiness america amended signals to spot the existent problems, alternatively than number connected the luck of determination being a whistleblower." 
