• 8 Posts
  • 3.24K Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • Cruze is a different thing (that’s a Chevy sedan). Cruise, the robotaxi company, did have a very public incident where a pedestrian was hit and dragged. But before that, they had driven more than 1 million miles without anything like it happening.

    And while these vehicles do have human monitors, they’re mostly that: monitors. The cars are mostly driving themselves.



  • As much as people hate short sellers, this is how they’re useful. They help make stock market prices more realistic by finding overvalued companies, short-selling them, and then telling the world how badly overvalued those companies are, so that their stock returns to a more realistic price.




  • What’s really awful is that it seems like they’ve trained these LLMs to be “helpful”, which means to say “yes” as much as possible. But, that’s the case even when the true answer is “no”.

    I was searching for something recently. Most people with similar searches were trying to do X; I was trying to do Y, which differed in subtle but important ways. There are tons of resources out there showing how to do X, but none showing how to do Y. The “AI” answer gave me directions for doing Y by showing me the procedure for doing X, with certain parts changed so that they matched Y instead. It doesn’t work like that.

    Like, imagine a recipe that not just uses sugar but that relies on key properties of sugar to work, something like caramel. Search for “how do I make caramel with stevia instead of sugar” and the AI gives you the recipe for making caramel with sugar, just with “stevia” replacing every mention of “sugar” in the original recipe. Absolutely useless, right? The correct answer would be “You can’t do that, the properties are just too different.” But, an LLM knows nothing, so it is happy just to substitute words in a recipe and be “helpful”.
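    That failure mode is, in effect, a blind find-and-replace. A hypothetical Python sketch (the recipe text is made up for illustration):

```python
# Made-up illustration of the "helpful" substitution failure:
# swapping a word in a procedure without understanding the chemistry.
recipe = "Heat the sugar to 170C until the sugar melts and caramelizes."

# Effectively what the bad answer does: a blind find-and-replace.
naive_answer = recipe.replace("sugar", "stevia")

print(naive_answer)
# "Heat the stevia to 170C until the stevia melts and caramelizes."
# Stevia doesn't melt or caramelize, so the instructions are nonsense.
```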



  • Nobody who knows anything about self-driving would give Tesla any credit for that.

    The fact that Tesla’s slight improvement to lane-assist is labelled as “full self-driving” by Tesla has really damaged the reputation of the industry. GM is way ahead of them because of the knowledge they have in-house thanks to Cruise. And Cruise gave up because they were so far behind Waymo.






  • Toyota is a car company that makes solid cars with an anonymous CEO. It has a P/E ratio of about 7.5 and a market cap of about $230 billion.

    Tesla is a car company that makes a few solid cars, and one absolutely ridiculous vehicle that is the laughing stock of trucks. It has a CEO that most of the world has an intensely negative opinion of, and that negative opinion is highest among the people wealthy enough to consider buying a Tesla. It has a P/E ratio of approximately 175 and a market cap of $918 billion.

    If Tesla’s P/E ratio fell to something more realistic, let’s be extremely generous and say 7, it would be worth only about $37 billion, roughly 4% of its current value. Realistically, it should be significantly below that. When people are regularly spray-painting swastikas on your vehicles, your P/E ratio should be lower than other automobile manufacturers’, because it’s a sign that your future growth prospects may be slightly impaired by people’s unwillingness to buy your vehicles.

    When Tesla’s value does collapse (and I’m convinced it’s a “when” not an “if”) it will crater Musk’s net worth, because at least half of it is due to his Tesla stock.
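    The arithmetic above can be checked directly, using the figures as quoted in the comment (not live market data):

```python
# Back-of-the-envelope check of the valuation argument above,
# using the figures quoted in the comment (not live data).
current_market_cap = 918e9  # Tesla market cap, as quoted
current_pe = 175            # Tesla P/E, as quoted
generous_pe = 7             # the "extremely generous" comparable P/E

implied_earnings = current_market_cap / current_pe  # ~$5.2B/year
implied_cap = generous_pe * implied_earnings

print(f"Implied market cap: ${implied_cap / 1e9:.0f}B")                      # $37B
print(f"Fraction of current value: {implied_cap / current_market_cap:.0%}")  # 4%
```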



  • Yup. Canada already had a treaty with the US in NAFTA. But, in Trump’s first term, it was pressured into abandoning that and signing a new treaty, the USMCA. The whole point of these treaties was to provide tariff-free access to US markets (and vice versa). Canada had to give up a lot to get that guarantee.

    Now Trump is violating his own new USMCA treaty; who knows why. Most likely it’s just a grudge against Trudeau. So, Canada gave up certain rights to get the treaty, and now the treaty is being ignored by the very guy who pushed for it. The smart move on Canada’s part would be to stop enforcing all the IP terms in the deal, which are hugely favourable to Hollywood at the expense of Canada’s domestic industries.




  • merc to Programmer Humor@programming.dev · Tradeoffs · 2 days ago

    I believe that, because test scripts tend to involve a lot of very repetitive code, and it’s normally pretty easy to read that code.

    Still, I would bet that out of 1000 tests it writes, at least 1 will introduce a subtle logic bug.

    Imagine you hired an intern for the summer and asked them to write 1000 tests for your software. The intern doesn’t know the programming language you use, doesn’t understand the project, but is really, really good at Googling stuff. They search online for tests matching what you need, copy what they find and paste it into their editor. They may not understand the programming language you use, but they’ve read the style guide back to front. They make sure their code builds and runs without errors. They are meticulous when it comes to copying over the comments from the tests they find and they make sure the tests are named in a consistent way. Eventually you receive a CL with 1000 tests. You’d like to thank the intern and ask them a few questions, but they’ve already gone back to school without leaving any contact info.

    Do you have 1000 reliable tests?
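    A hypothetical Python example of the kind of subtle logic bug such a copied test can carry: it builds, runs, and passes, yet never exercises the case it appears to cover (the function and test are invented for illustration):

```python
def is_even(n: int) -> bool:
    return n % 2 == 0

def test_is_even():
    # Looks like it covers both branches, but the copied template was
    # adapted inconsistently: both inputs are even, so the odd branch
    # is never exercised at all.
    assert is_even(2) is True
    assert is_even(4) is True  # the template's "negative case", mangled

test_is_even()
print("1 test passed")  # green, despite the coverage hole
```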


  • That’s the problem. Maybe it is.

    Maybe the code the AI wrote works perfectly. Maybe it just looks like how perfectly working code is supposed to look, but doesn’t actually do what it’s supposed to do.

    To get to the train tracks on the right, you would normally have dozens of engineers working over probably decades, learning how the old system worked and adding to it. If you’re a new engineer and you have to work on it, you might be able to talk to the people who worked on it before you and find out how their design was supposed to work. There may be notes or designs generated as they worked on it. And so on.

    It might take you months to fully understand the system, but whenever there’s something confusing you can find someone and ask questions like “Where did you…?” and “How does it…?” and “When does this…?”

    Now, imagine you work at a railroad and show up to work one day and there’s this whole mess in front of you that was laid down overnight by some magic railroad-laying machine. Along with a certificate the machine printed that says that the design works. You can’t ask the machine any questions about what it did. Or, maybe you can ask questions, but those questions are pretty useless because the machine isn’t designed to remember what it did (although it might lie to you and claim that it remembers what it did).

    So, what do you do, just start running trains through those tracks, assured that the machine probably got things right? Or, do you start trying to understand every possible path through those tracks from first principles?