Because the AI works so well, or because it doesn't?
> ”By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.
That's kinda wild. I'm kinda shocked they put it in writing.
I'm seeing a lot of frustration at the leadership level about product velocity, and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
Big tech is suffering from the incumbent's disease.
What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.
Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.
Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.
For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for answers people look for.
Once you have a golden goose, the risk taking innovators who built the thing are replaced by risk averse managers who protect it. Not killing the golden goose becomes priority 1, 2, and 3.
I think this is the steel man of the “founder mode” conversation that people were obsessed with a year ago: people obsessed with “process” who are happy if nothing is accomplished, because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company's goals.
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...
Isn't that "move fast and break things" by another name?
Sometimes moving fast in a large org boils down to finding a succinct way to tell the lawyer "I understand what you're saying, but that's not consistent with my understanding of the legality of the issue, so I will proceed with my work. If you want to block my process, the escalation path is through my manager."
(I have more than once had to explain to a lawyer that their understanding was wrong, and they were imposing unnecessary extra practice)
Well, let's give a concrete example. I want to use a SaaS as part of my job. My manager knows this and supports it. In the process of trying to sign up for the SaaS, I have to contact various groups in the company: the cost center folks to get approval for spending the money on the SaaS, the security folks to ensure we're not accidentally leaking IP to the outside world, and the legal folks to make sure the contract negotiations go smoothly.
Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).
> pointed at internal gatekeepers who mainly seem to say no to product releases.
I've never observed Facebook to be conservative about shipping broken or harmful products, so the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.
When I worked there (7 years), the gatekeeper effect was real. It didn't stop broken or harmful products, but it did stop revenue-neutral or revenue-negative ones, even if we had proven the product was positive to user wellbeing or brand favorability.
We learned not to publish as much information about contracts and to have huge networks of third party data sharing so that any actually concerning ones get buried in noise.
Yes, I was SRE at Google (Ads) for several years and that influences my work today. SRE was the first time I was on an ops team that actually was completely empowered to push back against intrusive external changes.
The big events that shatter everything to smithereens aren't that common or really dangerous: most of the time you can lose something, revert and move on from such an event.
The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
Makes sense. It's easier to be right by saying no, but this mindset costs great opportunities. People who are interested in their own career management can't innovate.
You can't innovate without taking career-ending risks. You need people who are confident to take career-ending risks repeatedly. There are people out there who do and keep winning. At least on the innovation/tech front. These people need to be in the driver seat.
"You can't innovate without taking career-ending risks."
It's not the job of employees to bear this burden: if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.
The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and copying its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than both would like to admit.
... until reality catches up with a software engineer's inability to see outside of the narrow engineering field of view, neglecting most of the things end-users will care about. Millions if not billions are wasted, and leadership sees that checks and balances for the engineering team might be warranted after all, because while the velocity was there, you now have an overengineered product nobody wants to pay for.
You're on the mark: this is the real challenge in software development. Not building software, but building software that actually accomplishes the business objective. Unless of course you're just coding for other reasons besides profit.
This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.
I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.
One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.
It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
Pournelle's law of bureaucracy. Any sufficiently large organization will have two kinds of people: those devoted to the org's goals, and those devoted to the bureaucracy itself, and if you don't stop it, the second group will take control to the point that the bureaucracy itself becomes the goal to which all others are secondary.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.
The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.
The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.
The you in that example is the Org, or the person leading it. I find that what usually happens is the executive in charge of it all either wises up to the situation or, more commonly, gets replaced by someone with fresh eyes. In any case, it often takes months and years to get to a point of bureaucratic bloat but the corrections can be swift.
I also think on this topic specifically there is so much labor going into low/no ROI projects and it's becoming obvious. That's just like my opinion though, should Meta even be inventing AI or just leveraging other AI products? I think that's likely an open question in their Org - this may be a hint to their latest thoughts on it.
> By reducing the size of our team, fewer conversations will be required to make a decision
This was noted a long time ago by Brooks in The Mythical Man-Month. Every person added to a team increases the communication overhead, which grows as n(n − 1)/2 pairwise channels. Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
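To put rough numbers on that (a quick back-of-the-envelope sketch in Python; the team sizes are picked arbitrarily):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people (Brooks's n(n-1)/2)."""
    return n * (n - 1) // 2

# Every new hire adds n-1 brand-new channels that have to be kept in sync.
for n in [2, 5, 10, 50, 100]:
    print(f"team of {n:>3}: {channels(n):>4} channels")
# team of   2:    1 channels
# team of   5:   10 channels
# team of  10:   45 channels
# team of  50: 1225 channels
# team of 100: 4950 channels
```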
The other option would be to have certain people just do the work they're told to do, but that's hard in knowledge-based jobs.
A solution to that scaling problem is to have most of the n not actually doing anything. Sitting there and getting paid but adding no value or overhead.
"Load bearing." Isn't this the same guy that sold his company for $14B. I hope his "impact and scope" are quantifiably and equivalently "load bearing" or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar.
Seems like a purge: new management comes in and purges anyone not loyal to it. Standard playbook. Happens in every org. Instead of euphemisms like "load-bearing" they could have straight out called it eliminating the old guard.
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.
I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.
They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.
Why not just move people around you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when stock price was lower, making their compensation much higher. Getting new people in at high stock price allows company to save money
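A rough illustration of that last point (hypothetical numbers, and it assumes the common practice of converting a fixed dollar target into shares at the grant-date price):

```python
# Hypothetical RSU math: same dollar target, granted at very different stock prices.
target_grant_usd = 400_000      # 4-year equity target (made-up number)
grant_price_then = 120.0        # stock price when the older hire joined (made up)
price_today      = 600.0        # stock price now (made up)

old_hire_shares = target_grant_usd / grant_price_then   # ~3,333 shares
new_hire_shares = target_grant_usd / price_today        # ~667 shares

print(f"old hire's grant valued at today's price: ${old_hire_shares * price_today:,.0f}")
print(f"new hire's grant at today's price:        ${new_hire_shares * price_today:,.0f}")
# old hire's grant valued at today's price: $2,000,000
# new hire's grant at today's price:        $400,000
```

Same nominal target, very different cost to the company, which is the cynical incentive being described.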
> More likely: people in charge change, and they usually want “their people” around
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?
Nevermind long term impacts, you'll probably be gone and a VP at goog or oracle by then!
Can you rehire that quickly though? I know where I live the government won't allow you to rehire people you just fired. Because the severance benefits have lower tax requirements and if you could do that you could do it every year as a form of tax evasion.
VR + AI could actually be kinda fun (I’m sure folks are working on this stuff already!). Solve the problems of not enough VR content and VR content creation tools kind of sucking by having AI fill in the gaps.
But it is just a little toy, Facebook is looking for their next billion dollar idea; that’s not it.
Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.
This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
> Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
This is true, but sadly the customer isn’t always the user and thus nonsensical products (now powered by AI!) continue to sell instead of being displaced quickly by something better.
"You've got to start with the investors, make them panic over AI woo and hand you all their capital, and only then do you figure out if this crap is actually profitable" - Jov Stebes
I haven’t even thought of Meta as a competitor when it comes to AI. I’m a semi-pro user and all I think of when I think of AI is OpenAI, Claude, Gemini, and DeepSeek/Qwen, plus all the image/video models (Flux, Seedance, Veo, Sora)
My voice activated egg timer is amazing. There are millions of useful small tools that can be built to assist us in a day-to-day manner... I remain skeptical that anyone will come up with a miracle tool that can wholesale replace large sections of the labor market and I think that too much money is chasing after huge solutions where many small products will provide the majority of the gains we're going to get from this bubble.
For any given time period N, if it takes > 0 time or effort to make a tool, then there are provably less possible tools than infinity for sure.
If we consider time period of length infinity, then it is less clear (I don’t have room in the margins to write out my proof), but since near as we can tell we don’t have infinity time, does it matter?
There is a real question of if a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely, and that is people that can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure if this is actually an advantage (the baselines just got lifted for all).
Sounds to me like the classic communication problems you see everywhere: 1) people don't listen, 2) people can't explain in general terms, 3) while 2 is taking place, so is 1, and as that triggers repeat after repeat, people get frustrated and give up.
Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all
It damages trust. Layoffs are nearly always bad for a company, but are terrible in a research environment. You want people who will geek out over math/code all day, and being afraid for your job (for reasons outside your control!) is very counterproductive. This is why tenure was invented.
Perhaps I'm being uncharitable but this line "each person will be more load-bearing" reads to me as "each person will be expected to do more work for the same pay".
To me, it's the opposite. I think the words used are not exactly well-thought-through, but what they seem to want to be saying is they want less bureaucratic overhead, smaller teams responsible for bigger projects and impact.
And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour-output applies in both directions, otherwise it's illogical. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, but then turn around and automatically assume that layoffs lead to overburdened employees if the scope doesn't change, because now they'll have to do 10 times as much work.
Now, layoffs often do lead to that in practice. But for that claim to hold, more evidence is needed about the specifics of who was laid off and what projects have been culled, which we certainly don't seem to have here.
Layoffs are everywhere. Millions of employees have had to do more without any change in compensation. My own team has decreased from six to two, but I am not seeing any increased pay for being more load bearing.
I will always pour one out for the fellow wage slave (more for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More for the teachers, nurses, DOGEd FDA employees, or whoever was only ever taking in a more modest wage but is continually expected to do more with less.
Management cutting headcount and making the drones work harder is not a unique story to Facebook.
Still, regardless of the eye-watering amount of money, there's still a maximum amount of useful work you can get out of someone. Demand too much, and you actually lower their total productivity.
(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)
We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
Having worked at Meta, I wish they did this when I was there. Way too many people not agreeing on anything and having wildly different visions for the same thing. As an IC below L6 it became really impossible to know what to do in the org I was in. I had to leave.
They could do what they did in the Manhattan Project: have different teams competing on similar products. Apparently Meta is willing to throw away money; this could be better than giving the talent to their competitors.
They properly fucked FAIR. It was a leading, if not the leading, AI lab.
Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024, if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
The thing that many so-called smart people don't realise is that leadership and vision are incredibly scarce traits.
Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re. his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there leading the charge on the vision.
> "By cutting staff, we can save valuable budget for heating and paperclips. Anyway can I sell you some shares. Superintelligence is just around the corner!"
I can actually relate to that, especially in a big co where you hire fast. I think it's shitty to over-hire and lay off, but I've definitely worked in many teams where there were just too many people (many very smart) with their own sense of priorities and goals, and it makes it hard to get anything done. This is especially true when you over-divide areas of responsibility.
I mean, I guess it makes sense if they had a particularly Byzantine decision-making structure and all those people were in roles that amounted to bureaucracy in that structure and not actually “doers”.
What's wild about this? They're saying that they're streamlining the org by reducing decision-makers so that everything isn't design-by-committee. Seems perfectly reasonable, and a common failure mode for large orgs.
Anecdotally, this is a problem at Meta as described by my friends there.
Maybe they shouldn't have hired and put so many cooks in the kitchen. Treating workers like pawns is wild and you should not be normalizing the idea that it's OK for Big Tech to hire up thousands, find out they don't need them, and lay them off to be replaced by the next batch of thousands by the next leader trying to build an empire within the company. Treating this as SOP is a disservice to your industry and everyone working in it who isn't a fat cat.
No, I'm totally fine with it. No one can guess precisely how many people need to be hired and I'd rather they overshoot than undershoot because some law stops it. This means that now some people were employed who would not otherwise be employed. That's spending by Meta that goes to people.
I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
> while the company continues to hire workers for its newly formed superintelligence team, TBD Lab.
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> ”By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0" ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
The bastards are playing both sides! Employees are expected to be So Enamored that we act like we have an ownership stake. Imagine the type of relationships that working ~18-hour days 6 days a week might offer! Generative Porn would be a welcome escape, probably.
Charging me for stuff I am not using is why I will sooner rather than later leave google. It's ridiculous how they tack on this non-feature and then charge you as if you're using it.
For ChatGPT I have a lower bar because it is easier to avoid.
Hardly, they are burning money with TikSlop, they don't even know how to monetize it, just YOLO'd the product to keep investors interested.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.
Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
> I just don't understand how smart people think this is going to work out at all.
The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.
They don't need to 'keep the economy running' for that much longer to get their way.
> I just don't understand how smart people think this is going to work out at all.
That's the thing, they aren't looking at the big picture or long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide for themselves or have a life similar to their parents'.
“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”
I will accept the Chief Emergency Shutoff Activator Officer role; my required base comp is $25M. But believe me, nobody can trip over cables or run multiple microwaves simultaneously like I can.
> ”By reducing the size of our team, fewer conversations will be required to make a decision,..."
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Guaranteed this is them cleaning out the old guard; it's either axe them, or watch a brutal political game between legacy employees and new LLM AI talent.
While ex-FAIR people should have little problem finding a job, the market for paying research folks that level of TC to work on ambitious research projects, unless you're in a very LLM-specific space, is absolutely shrinking.
It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of this current generation of AI (even if history will very likely prove him correct).
Lots of companies spun up giant AI teams over the last 48 months. I wouldn’t be surprised at all if 50+% of these roles are eliminated in the next 48 months.
The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.
Tbh yes. I like AI but I'm getting a bit sick of the hype. All our top dogs want AI in everything no matter whether it actually benefits the product. They even know it's senseless but they need to show the shareholders that they are all-in on AI.
It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.
Meta is fumbling hard. Winning the AI race is about marketing at this point - the difference between the models is negligible.
ChatGPT is the one on everyone's lips outside of technology, and in the media. Meta has a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)
>Winning the AI race is about marketing at this point - the difference between the models is negligible.
Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?
They are shoving it down, WhatsApp has two entry points on the main view. I've received multiple requests for tips on how to hide them, I don't think people are interested. And I'd hide them too if I just could.
Surely winning the AI race means finding secret techniques that allow development of superior models, and it isn't apparent that anyone has anything special enough to actually be winning?
I think there's some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how O1 worked and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
I think that depends on how optimistic/pessimistic one is on how much more superior the models are going to get. If you're really pessimistic then there isn't all too much one company could do to be 2x or more ahead already. If you're really optimistic then it doesn't matter what anyone is doing today because it's about who finds the next 100x leap.
The models have increased greatly in capabilities, but the competitors have simply kept up, and it's not apparent that they won't continue to do that. Furthermore, the breakthroughs, i.e. fundamentally better models, can happen anywhere people can and do try out new architectures, and that can happen in surprisingly small places.
It's mostly about culture and being willing to experiment on something which is often very thankless since most radical ideas do not give an improvement.
Winning the AI race is winning the application war. Similar to how internet, OS has been there for a long time, but the ecosystem took years to build.
But application work is toil, and it means knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements like the alpha go moment?
Meta is the worst at building platforms out of the big players. If you're not building to Facebook or Metaverse, what would you be building for if you were all-in on Meta AI? Instagram + AI will be significant, but not Meta-level significant, and it's closed. Facebook is a monster but no one's building to it, and even Mark knows it is tomorrow's Yahoo.
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders.. at least in terms of mindshare and AUMs.
> Winning the AI race is winning the application war
This. 100% This.
As an early stage VC, I'd say the foundational model story is largely over, and understanding how to apply models to applications, or how to protect applications leveraging models, is the name of the game now.
> Maybe adversarial agents will see improvements...
There is increased appetite now to invest in models that are tackling reasoning and RL problems.
I mostly agree with this but make an exception for MetaAI which seems egregiously bad compared to the others I use regularly (Anthropic's, Google's, OpenAI's)
Every time I see news like this, I just try to focus more on working on things I think are meaningful and contributing positively to the world... there is so much out of our control but what is in our control is how we use our minds and what we believe in.
If this impacted you - we are hiring at Magnetic (AI doc scanning and workflow automation for CPA firms). Cool technical problems, building a senior, co-located team in SF to have fun and build a great product from scratch
I’m kind of surprised Wang is leading AI at Meta? His knowledge is around data labeling which is important sure but is he really the guy to take this to the next level?
A skim of his Wikipedia bio suggests that he's smart, but mostly just interested in making money for himself. Since high school, he's spent time at: some fintech place, a Q-and-A site, MIT briefly, another fintech place, then data labeling and defense contracting. He sounds like a cash-seeking missile to me.
It was never true, unless you're a top 100 in the world AI researcher. 99% of AI investment is in infrastructure (GPUs, data centers, etc). The goal is to eliminate labor, whether AI-skilled or not.
Taking a guess here but, I think what they're saying is, if most investors have gone all-in on AI, and the bubble pops, who will be investing in the next big thing? What investors will still have money to invest?
my take:
Meta’s leadership and dysfunctional culture failed to nurture talent. To fix that, they started throwing billions of $ at hiring from outside desperately.
And now they're relying on these newcomers to purge the old Meta styled employees and by extension the culture they'd promoted.
I think this is because older AI doesn't get done what LLM-based AI does. Older AI = conventionally trained models, neural networks (without transformers), support vector machines, etc. For that reason, they are letting those teams go. They don't see revenue coming from that. They don't see new product lines (like generative image/video AI). AI may have this every 5 years: a breakthrough moves the technology into an entirely new area, and then older teams have to re-train, or have a harder time.
I would expect nearly every active AI engineer who trained models in the pre-LLM era to be up to speed on the transformer-based papers and techniques. Most people don't study AI and then decide "I don't like learning" when the biggest AI breakthroughs and ridiculous pay packages all start happening.
This seems like the most likely explanation. Legacy AI out in favour of LLM focused AI. Also perhaps some cleaning out of the old guard and middle management while they're at it.
There always has been a stunning amount of inertia from the old big data/ML/"AI" guard towards actually deploying anything more sophisticated than linear regression.
I really doubt that. Most of the profit-generating AI in most industries... decision support, spotting connections, recommendations, filtering, etc... runs on old school techniques. They're cheaper to train, cheaper to run, and more explainable.
Last survey I saw said regression was still the most-used technique with SVM's more used than LLM's. I figured combining those types of tools with LLM tech, esp for specifying or training them, is a better investment than replacing them. There's people doing that.
Now, I could see Facebook itself thinking LLM's are the most important if they're writing all the code, tests, diagnostics, doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.
That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.
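For context, the "old school" workhorse being described above is only a few lines of code (a minimal scikit-learn sketch on synthetic data; nothing to do with Meta's actual stack):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for clicks, churn, fraud flags, etc.
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cheap to train, cheap to run, and the coefficients double as the explanation.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
print("per-feature weights:", model.coef_.round(2))
```

That's the "cheaper to train, cheaper to run, and more explainable" point in practice.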
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies, and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.
EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?
> What is a productive way they can use LLMs in any of their flagship products today?
It's kind of the other way around, isn't it? Meta has the posts of a billion users with which to train LLMs, so they're in a better position to make them than most others. As for what to do with it then, isn't that that pretty similar no matter who you are?
On top of that, sites are having problems with people going to LLMs instead of going to the site, e.g. why ask a question on Facebook to get an answer tomorrow if ChatGPT can tell you right now? So they either need to get in on the new thing which is threatening to eat their lunch or they need to commoditize it sufficiently that there isn't a major incumbent competitor posed to sit between the users and themselves extracting a margin from the users, or worse, from themselves for directing user traffic their way instead of to whoever outbids them.
Meta's mission is to build the future of human connection -- this totally makes sense if you assume they believe that the future of human connection is with an AI friend.
That https://character.ai is so enormously popular with people who are under the age of 25 suggests that this is the future. And Meta is certainly looking at https://character.ai with great interest, but also with concern. https://character.ai represents a threat to Meta.
Years ago, when Meta felt that Instagram was a threat, they bought Instagram.
If they don't think they can buy https://character.ai then they need to develop their own version of it.
Character.ai over raised, the leadership team left, and there's no appreciable revenue AFAIK and have heard. Kids under 25 role playing with cartoons are hard to monetize.
Then there's also the reputational harm if Meta acquires them and the journalists write about the bad things that happen on that platform.
This is too reductionist. When you go to work, do you go maximize shareholder value? Were you ever part of a team and felt good about the work you were doing together? Was it because you were maximizing shareholder value?
> When you go to work, do you go maximize shareholder value?
Yes. The further up the ladder you go, the more this is pounded into your head. I was at a few Big Tech companies and this is how you write your self-assessment: "Increased $$$ revenue due to higher user engagement, shipped xxx product that generated xxx sales", etc.
If you're a level 1/2 engineer, sure. You get sold on the company mission. But once you're at a senior level, you are exposed to how the product/features will maximize the company's financial and market position, and how each engineer's hours are directly benefiting the company.
> Were you ever part of a team and felt good about the work you were doing together?
Maybe some startups or non-profits can have this (like Wikipedia or Craigslist), but definitely not OpenAI, Google and Meta.
Most of the work I as an engineer do is jumping through hoops that engineers from other departments have drawn up. If someone up high really cared, wouldn't they have us work on something that matters?
You are looking at it wrong. Meta is a business. You know what they sell? Ads.
In fact, they are the #1 or #2 place in the world to sell an ad, depending on who you ask. If the future turns out to be LLM-driven, all that ad money is going to go to OpenAI or, worse, to Google, leaving Zuck with no revenue.
So why are they after AI? Because they are in the business of selling eyeball placement, and LLMs becoming the de facto platform would eat into their margins.
Re: "What is a productive way they can use LLMs in any of their flagship products today": with LLMs, users would not interact with other users, and also would not leave the platform.
Meta's actual mission is to keep people on the platform and to do what can be done so users do not leave the platform. I found out that from this perspective Meta's actions make more sense.
I usually try not to be so cynical but just couldn't resist here: What if the future of human connection is to replace it with para social relationships that can be monetized?
That said I am not cynical about mission statements like that per se, I do think that making large organizations work towards a common goal is a very difficult problem. Unless you're going to have a hierarchical command and control system in place, you need to do it through shared culture and mission.
Targeting ads better. Better sentiment analysis. Custom ads written for each user based on their preferences. Features for their AR glasses. Probably try to take a piece of the Google search pie. Use this AI search to serve ads.
Ads are their product mostly, though they are also trying to get into consumer hardware.
If your friends are human then you could collectively decide to leave for another platform, that's not very cash money for Meta. They want to go past you being on Facebook cause all your friends are there, they want you to be friends with the platform itself.
Side note, has black mirror done this yet or are they still stuck on "what if you are the computer" for the 34th time?
The nouveau riche find out this is definitely not at all true the hard way. Easy come, easy go. If your children remain rich, they may get some respect. Your grandchildren will be powerful. You’ll be a crass old coot.
or the fact that John D. Rockefeller was furious that Standard Oil got split up despite the stock going up and making him much richer.
It's not so clear what motivates the very rich. If I doubled my income I might go on a really fancy vacation and get that Olympus 4/3 body I've been looking at and the fast 70-300mm lens for my Sony, etc. If Elon Musk changes his income it won't affect his lifestyle. As the leader of a corporation you're supposed to behave as if your utility function of money was linear because that represents your shareholders but a person like Musk might be very happy to spend $40B to advance his power and/or feeling of power.
To clarify, you can have power without money, for example initial revolutionaries. Money buys power, and power can convert into money depending on the circumstances.
the people with all the firepower won't let you buy your own private military (or develop your own weapons systems without being under their control). The end-of-line power (violence) is a closely guarded monopoly.
But, on the flip side, coercive power cannot stand on its own without money too. The CCP's Politburo know beyond a doubt that they have coercive power over billionaires like Jack Ma, but they try to accommodate these entrepreneurs who help catalyze economic growth & bring the state more foreign revenue/wealth to fund its coercive machine.
America's elected leaders also have power to punish & bring oligarchs to book legally, but they mostly interact symbiotically, exchanging campaign contributions and board seats for preferential treatment, favorable policy, etc.
Putin can order any out-of-line oligarch to be disposed of, but the economic & coercive arms of the Russian State still see themselves as two sides of the same coin.
So, yes: coercive power can still make billionaires face the wall (Russian revolution, etc.) but they mostly prefer to work together. Money and power are a continuum like spacetime.
Same difference with social media too. I thought Twitter was for micro-blogging, LinkedIn for career networking, Instagram for pictures, and YouTube for video sharing, etc. Now everything has boiled down to just a feed of pictures, videos, and text. So much for a "network", graph theory, research, ...
Even if we assume you're correct and every company's true mission is to maximize power and money, the stated mission is still useful in helping us understand how they plan to do this.
The questions in the original comment were really about the "how", and are still worth considering.
The consistency of a mission statement? Are you guys for real?
To be clear: I'm not arguing that everyone at OpenAI or Meta is a bad person, I don't think that's true. Most of their employees are probably normal people. But seriously, you have to tell me what you guys are smoking if a mission statement causes you to update in any direction whatsoever. I can hardly think of anything more devoid of content.
To be even more specific, the company making money is merely a proxy for the actual goal: increased valuation for stockowners. Subtle but very significant difference
Because a CEO with happy shareholders has more power. The shareholder value thing is a sop, and sometimes a dangerous one.
We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.
I mean... what you say is not, on the face of it, false; however...
For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.
If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.
Why care what they say their mission is? It's clearly to be on top of a possible AI wave and become or remain a huge company in the future, increasing value for their stock owners. Everything else is BS.
>Is there any case for anyone to make inside or outside Meta that AI is needed to build the future of human connection?
No, Facebook's strategy has always been the inverse of this. When they support technologies like this they're 'commoditizing the complement', they're driving the commercial value of the thing they don't have to zero so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source, it eliminates their biggest competitors advantages.
> - Meta's mission is to build the future of human connection and the technology that makes it possible
Meta arguably achieved this with the initial versions of their products, but even AI aside, they're mostly disconnecting humans now. I post much less on Instagram and Facebook now that they almost never show my content to my own friends or followers, and show them ads and influencer crap instead, so it's basically talking to a wall in an app. Add to this that companies like Meta are all forcing PIP quotas and mass layoffs which in turn causes everyone in my social circle to work 996.
So they have not only taken away online connections to real humans, they have ALSO taken away offline connections to real humans because nobody has time to meet in real life anymore. Win-win for them, I guess.
I've been wondering this for some time as well. What's it all for? The only product in their lineup where it seems obvious is the Meta glasses.
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
The traditional way of responding to this is the usual collective emulation of Struggle Sessions but I can easily come up with a couple of plausible answers for you:
* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.
* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.
In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by United Fruit Corporation. But you could at least try the mildest attempt at exercising your minds.
Where are all these genius American AI scientists that they should hire instead? Pull up a list of the top 1000 most cited AI research papers published in the last decade and look at the authors. You'll find them full of names from China, India, Russia, Eastern Europe, Israel. They are the ones getting jobs. This isn't discrimination, just reality.
I'm Indian -- I think a lot of this comes from the fact that the Brits left such a horrible legal system behind that it's hard to trust it to resolve disputes well. So people default to family/community trust relations.
I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).
Honestly I find this kind of thinking too narrow. It’s not a Satya problem, nor a Shengjia problem, it’s a systemic problem where people from most regions of the world overtly practice illegal workplace discrimination in the U.S., and the American government at all levels is not equipped to prosecute the malfeasance. Not 1 day ago I completed a systemic bias training module mandated by the State of California to keep current with a professional certification. All of the examples were coded as straight white males doing something bad to another group (“acting cold to people of color”, “preferring not to work with non-native English speakers”, “not promoting women with young children”)
I get it though. Big tech's HR sucks big time. You need super intelligent people for this kind of work. You can't have incompetent people with PhDs holding back the real brains.
I don't know how it's possible that companies like Meta could get away with having non-technical people as HR. They need all their HR people to be top software engineers.
You need coding geniuses just to be doing the hiring... And I don't mean people who can solve leetcode puzzles quickly. You need people with a proven track record solving real, difficult problems. Fully completed projects. And that's just to qualify for the HR team IMO... Not worthy enough to be contributing code to such important project. If you don't treat the project as if it is highly important, it won't be.
Normal HR people just fill the company with political nonsense.
I totally get that Yann LeCun and FAIR want to focus on next-gen research, but they seem almost proud of their distance from product development at Meta. Isn't that a convenient way to avoid accountability? Meta has published a ton of great work, but appears to be losing economically in AI. It's understandable that the executive team wants change.
Layoffs are often how a company manages its stock price. Company gives guidance, is likely to miss, lays off a bunch, claims the savings, meets guidance, keeps stock looking good.
How long before it finally clicks that facebook/Meta was never more than white privilege, timing, luck, and dumb money? That the concepts were not new and Zuck isn't that innovative or visionary? This goes for all the "Tech Luminaries" and Musk Rats...
This is in addition to another round of cuts from a couple months ago that didn't make the news. I heard from somebody who joined Meta in an AI-related division at a senior position a few months ago. Said within a couple of months of joining, almost his entire department was gutted -- VPs, directors, manager, engineers -- and he was one of the very few left.
Not sure of the exact numbers; given it was within a single department, the cuts were not big in absolute terms, but they definitely went swift and deep.
As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
This says more about Meta than about where AI is heading. For me personally, my work and my life have transformed dramatically, literally, since 2022, since OpenAI launched ChatGPT and what followed. I feel like I have a dozen assistants who help me leverage my skills exponentially, do tedious work, and do things I never had the time and resources to do. I see it in my salary, in the results I produce, in the projects I can accept and do.
My life after LLMs is not the same anymore. Literally.
It's only 600 so far... Rumors were that it was going to be in the thousands. We'll see how long they can hold off. Alexandr really wants to get rid of many more people.
Everyone here ragging on Wang but I can never figure out how some people grow the balls to work themselves into such high positions. Like, if I met with Zuck, I think he’d be unimpressed and bored within 2 minutes. Yet this guy (and others like him) can convince Zuck to give him a few billion dollars.
There is a language these people speak, and some physical primate posturing they do, which my brain just can’t emulate.
This is actually really interesting, because I've never actually seen anything coming out of LeCun's group that made it into production.
That does not mean that nothing did, but it indicates to me that FAIR's work never actually made it out of the lab, and basically everything that LeCun has been working on has been shelved.
That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production
I’ll be interested to see how this shakes out for who is leading AI at Meta going forward
I'm confused about Meta AI in general. It's _horrible_ compared to every other LLM I use. Customer ingress is weird to me too - do they expect people to use Facebook chat (Messenger) to talk to Meta AI mainly? I've tried it on messenger, the website, and have run llama locally.
My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.
Unclear if they have been successful at all so far.
> Google et al. were at one point pinnacle technologies too, but this was 20 years ago.
In 2017, Google literally gave us the transformer architecture that the entire current AI boom is based on.
I've never observed Facebook to be conservative about shipping broken or harmful products, so the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.
Yes I’m still bitter.
lol, that works well until a big issue occurs in production
The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
You can't innovate without taking career-ending risks. You need people who are confident enough to take career-ending risks repeatedly. There are people out there who do, and keep winning, at least on the innovation/tech front. These people need to be in the driver's seat.
It's not the job of employees to bear this burden - if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.
The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and copying its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than either would like to admit.
This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.
I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.
User need is very much secondary to company priority metrics.
It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and at VR/AR before that.
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.
The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.
The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.
I also think on this topic specifically there is so much labor going into low/no ROI projects and it's becoming obvious. That's just like my opinion though, should Meta even be inventing AI or just leveraging other AI products? I think that's likely an open question in their Org - this may be a hint to their latest thoughts on it.
This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
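Just to make the n(n − 1)/2 arithmetic concrete, here's a trivial Python sketch (nothing Meta-specific, just Brooks's formula applied to a few team sizes):

    # Brooks's pairwise communication channels: n people need n*(n-1)/2 paths
    def communication_paths(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (5, 10, 50, 100, 600):
        print(f"{team_size:>4} people -> {communication_paths(team_size):>7} channels")

    # prints roughly:
    #    5 people ->      10 channels
    #   10 people ->      45 channels
    #  100 people ->    4950 channels
    #  600 people ->  179700 channels

Going from 10 people to 600 isn't 60x the coordination; it's roughly 4,000x (45 channels vs. 179,700).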
The other option would be to have certain people just do the work they're told, but that's hard in knowledge-based jobs.
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.
Why not just move people around, you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?
Nevermind long term impacts, you'll probably be gone and a VP at goog or oracle by then!
But it is just a little toy; Facebook is looking for their next billion-dollar idea, and that's not it.
Even though the creator says LLMs aren't going in that direction, it's a fun read, especially when you're talking about VR + AI.
Author's note from late 2023: https://www.fimfiction.net/blog/1026612/friendship-is-optima...
This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
Meta is not even in the picture
Alexa?
Few tools are ok with sometimes right, sometimes wrong output.
If we consider a time period of length infinity, then it is less clear (I don't have room in the margins to write out my proof), but since, near as we can tell, we don't have infinite time, does it matter?
But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.
Every time I drop the AI and manually write my own code, it is just better.
Maybe they should reduce it all to Wang, he can make all decisions with the impact and scope he is truly capable of.
And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour output cuts both ways; otherwise the reasoning is inconsistent. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, and then turn around and automatically assume that layoffs lead to overburdened employees because, with the scope unchanged, they'll now have to do 10 times as much work.
Now, in practice, they often do. But for that claim to hold, more evidence is needed about the specifics of who was laid off and which projects have been culled, which we certainly don't seem to have here.
I will always pour one out for a fellow wage slave (more for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More for the teachers, nurses, DOGEd FDA employees, and whoever else was only ever taking in a more modest wage but is continually expected to do more with less.
Management cutting headcount and making the drones work harder is not a unique story to Facebook.
(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)
We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.
Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024, if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re: his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there leading the charge on the vision.
Alas, the burden falls on the little guys. Especially in this kind of labor market.
On what planet is it OK to describe your employees as "load bearing?"
It's a good way to get your SLK keyed.
"We want to cut costs and increase the burden on the remaining high-performers"
> "You can't expect to just throw money at an algorithm and beat one of the largest tech companies in the world"
A small adjustment to make for our circus: s/one of//
Anecdotally, this is a problem at Meta as described by my friends there.
Overshooting by 600 people sounds a lot like gross failure. Is someone going to take responsibility for it? Probably not. That person's job is safe.
New leader comes in and gets rid of the old team, putting his own preferred people in positions of power.
I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
Why not both?
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> ”By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0" ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
Well, all the people with no jobs are going to need something to fill their time.
They really need that business model.
For ChatGPT I have a lower bar because it is easier to avoid.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI, which knows jack shit about this space, will be able to.
Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.
They don't need to 'keep the economy running' for that much longer to get their way.
That's the thing, they aren't looking at the big picture or the long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide for themselves and have a life similar to their parents'.
There is a whole field of research on the post-scarcity economy. https://en.wikipedia.org/wiki/Post-scarcity
tldr; it's not as bad as you think, but the transition is going to be bad (for some of us).
I've read that before:
“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
Add that to “corporate personhood” and what do we get?
Probably automated themselves out of their roles, since "AGI" and now superintelligence ("ASI") have been "achieved internally".
The billion dollar question is.. where is it?
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of this current generation of AI (even if history will very likely prove him correct).
More like "scientific research regurgitators".
The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.
It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.
ChatGPT is the one on everyone's lips outside of technology, and in the media. Meta has a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this - but it's inevitable).
Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?
I think there's some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how O1 worked and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
The models have increased greatly in capabilities, but the competitors have simply kept up, and it's not apparent that they won't continue to do that. Furthermore, the breakthroughs (i.e. fundamentally better models) can happen anywhere people can and do try out new architectures, and that can happen in surprisingly small places.
It's mostly about culture and being willing to experiment on something which is often very thankless since most radical ideas do not give an improvement.
But application work is toil, and it means knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements like the AlphaGo moment?
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders.. at least in terms of mindshare and AUMs.
This. 100% This.
As an early-stage VC, I'd say the foundational model story is largely over, and understanding how to apply models to applications, or how to protect applications leveraging models, is the name of the game now.
> Maybe adversarial agents will see improvements...
There is increased appetite now to invest in models that are tackling reasoning and RL problems.
Just like Adam Neumann, who was reinventing the concept of workspaces as a community.
Just like Elizabeth Holmes, who was revolutionizing blood testing.
Just like SBF, who pioneered a new model for altruistic capitalism.
And so many others.
Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.
Oh, wow. I think you meant altruistic capitalism.
I mostly agree with this, but I make an exception for Meta AI, which seems egregiously bad compared to the others I use regularly (Anthropic's, Google's, OpenAI's)
And now they're relying on these newcomers to purge the old Meta-style employees and, by extension, the culture they'd promoted.
Many here were in LLMs.
Last survey I saw said regression was still the most-used technique, with SVMs more used than LLMs. I figured combining those types of tools with LLM tech, especially for specifying or training them, is a better investment than replacing them. There are people doing that (a rough sketch of the idea is below).
Now, I could see Facebook itself thinking LLMs are the most important if they're writing all the code, tests, diagnostics, doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.
That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.
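For what it's worth, the kind of "combining" I mean looks roughly like this: a minimal, hypothetical Python sketch where an LLM call provides cheap weak labels and a classical SVM does the actual serving. The llm_label function is a made-up stand-in for a real API call, not any particular vendor's interface.

    # Hypothetical sketch: an LLM provides (expensive, one-off) weak labels,
    # then a cheap classical model is trained on them and served at scale.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def llm_label(text: str) -> int:
        # Stand-in for an LLM API call that tags complaints as 1, everything else as 0.
        return 1 if any(w in text.lower() for w in ("refund", "broken")) else 0

    raw = [
        "I want a refund, the device arrived broken",
        "Love the new update, great work",
        "Broken screen out of the box, please help",
        "Works perfectly, thanks!",
    ]

    labels = [llm_label(t) for t in raw]      # LLM used once, to specify/train
    vec = TfidfVectorizer()
    X = vec.fit_transform(raw)
    clf = LinearSVC().fit(X, labels)          # classical model handles the volume

    print(clf.predict(vec.transform(["my order showed up broken"])))

The point is the division of labor: the expensive model defines the task, the cheap model runs it millions of times.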
How gracious.
Other AI companies will soon follow.
And maybe solve some of the actual problems out there that need addressing.
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.
EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?
It's kind of the other way around, isn't it? Meta has the posts of a billion users with which to train LLMs, so they're in a better position to make them than most others. As for what to do with it then, isn't that pretty similar no matter who you are?
On top of that, sites are having problems with people going to LLMs instead of going to the site, e.g. why ask a question on Facebook to get an answer tomorrow if ChatGPT can tell you right now? So they either need to get in on the new thing that is threatening to eat their lunch, or they need to commoditize it sufficiently that there isn't a major incumbent competitor poised to sit between the users and themselves, extracting a margin from the users or, worse, from Meta itself for directing user traffic its way instead of to whoever outbids it.
That https://character.ai is so enormously popular with people who are under the age of 25 suggests that this is the future. And Meta is certainly looking at https://character.ai with great interest, but also with concern. https://character.ai represents a threat to Meta.
Years ago, when Meta felt that Instagram was a threat, they bought Instagram.
If they don't think they can buy https://character.ai then they need to develop their own version of it.
They have the tech, if they still fail it's just marketing.
Then there's also the reputational harm if Meta acquires them and the journalists write about the bad things that happen on that platform.
It is really depressing how corporations don't look like they are run by humans.
Yes. The further up the ladder you go, the more this is pounded into your head. I was at a few Big Tech companies and this is how you write your self-assessment: "Increased $$$ revenue due to higher user engagement, shipped xxx product that generated xxx sales, etc."
If you're a level 1/2 engineer, sure, you get sold on the company mission. But once you're at a senior level, you are exposed to how the product/features will maximize the company's financial and market position, and how each engineer's hours are directly benefiting the company.
> Were you ever part of a team and felt good about the work you were doing together? Maybe some startups or non-profits can have this (like Wikipedia or Craigslist), but definitely not OpenAI, Google and Meta.
In fact, they are the #1 or #2 place in the world to sell an ad, depending on who you ask. If the future turns out to be LLM-driven, all that ad money is going to go to OpenAI or, worse, to Google, leaving Zuck with no revenue.
So why are they after AI? Because they are in the business of selling eyeball placement, and LLMs becoming the de facto platform would eat into their margins.
Meta's actual mission is to keep people on the platform and to do what can be done so users do not leave the platform. I find that from this perspective Meta's actions make more sense.
That said I am not cynical about mission statements like that per se, I do think that making large organizations work towards a common goal is a very difficult problem. Unless you're going to have a hierarchical command and control system in place, you need to do it through shared culture and mission.
Ads are their product mostly, though they are also trying to get into consumer hardware.
The critical word in there is… Never mind. If you can’t already see it, nothing I can say will make you see it.
Side note, has Black Mirror done this yet, or are they still stuck on "what if you are the computer" for the 34th time?
- Google wants to know what everyone is looking for.
- Facebook wants to know what everyone is saying.
Let me summarise their real missions:
1. Power and money
2. Power and money
3. Power and money
How does AI help them make money and gain more power?
I can give you a few ways...
Money is a measure of power, but it is not in fact power.
See https://hbr.org/2008/02/the-founders-dilemma
or the fact that John D. Rockefeller was furious that Standard Oil got split up despite the stock going up and making him much richer.
It's not so clear what motivates the very rich. If I doubled my income I might go on a really fancy vacation and get that Olympus 4/3 body I've been looking at and the fast 70-300mm lens for my Sony, etc. If Elon Musk changes his income it won't affect his lifestyle. As the leader of a corporation you're supposed to behave as if your utility function of money was linear because that represents your shareholders but a person like Musk might be very happy to spend $40B to advance his power and/or feeling of power.
America's elected leaders also have power to punish & bring oligarchs to book legally, but they mostly interact symbiotically, exchanging campaign contributions and board seats for preferential treatment, favorable policy, etc.
Putin can order any out-of-line oligarch to be disposed of, but the economic & coercive arms of the Russian State still see themselves as two sides of the same coin.
So, yes: coercive power can still make billionaires face the wall (Russian revolution, etc.) but they mostly prefer to work together. Money and power are a continuum like spacetime.
The questions in the original comment were really about the "how", and are still worth considering.
To be clear: I'm not arguing that everyone at OpenAI or Meta is a bad person, I don't think that's true. Most of their employees are probably normal people. But seriously, you have to tell me what you guys are smoking if a mission statement causes you to update in any direction whatsoever. I can hardly think of anything more devoid of content.
But even Meta's PR dept seems clueless when it comes to answering "How is Meta going to get more Power and Money through AI?"
We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.
But they don’t really need to.
For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.
If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.
Just top of the head answers.
Their mission is to make money. For the principals
No, Facebook's strategy has always been the inverse of this. When they support technologies like this, they're 'commoditizing the complement': driving the commercial value of the thing they don't own to zero, so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source; it eliminates their biggest competitors' advantages.
Meta arguably achieved this with the initial versions of their products, but even AI aside, they're mostly disconnecting humans now. I post much less on Instagram and Facebook now that they almost never show my content to my own friends or followers, and show them ads and influencer crap instead, so it's basically talking to a wall in an app. Add to this that companies like Meta are all forcing PIP quotas and mass layoffs which in turn causes everyone in my social circle to work 996.
So they have not only taken away online connections to real humans, they have ALSO taken away offline connections to real humans because nobody has time to meet in real life anymore. Win-win for them, I guess.
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
After all it is clear that if those were their actual missions they would be doing very different work.
* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.
* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.
In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by United Fruit Corporation. But you could at least try the mildest attempt at exercising your minds.
The only thing worse than a bubble? Two bubbles.
I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).
I don't know how it's possible that companies like Meta could get away with having non-technical people as HR. They need all their HR people to be top software engineers.
You need coding geniuses just to be doing the hiring... And I don't mean people who can solve leetcode puzzles quickly. You need people with a proven track record solving real, difficult problems. Fully completed projects. And that's just to qualify for the HR team IMO, never mind being worthy of contributing code to such an important project. If you don't treat the project as if it is highly important, it won't be.
Normal HR people just fill the company with political nonsense.
Maybe they should have just announced the layoffs without specifying the division?
Not sure of the exact numbers; given it was within a single department, the cuts were not big, but they definitely went swift and deep.
As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)
If you're not swimming in their river, or you weren't responsible for their spill, who cares?
But it spreads into other rivers and suddenly you have a mess
In this analogy the chemical spill - for those who don't have Meta accounts, or sorry, guess you do, we've made one for you, so sorry - is valuation
https://news.ycombinator.com/item?id=7211388
I see you, FAANG employees.
My life after LLMs is not the same anymore. Literally.
Alexandr Wang