• Tiefling IRL@lemmy.blahaj.zone · 127 points · 11 hours ago

    60k isn’t that much; I frequently run scripts against hundreds of thousands of rows at work. Wtf is he doing? Did he duplicate the government database onto his 2015 MacBook Air?

    • 4am@lemm.ee · 69 points · 10 hours ago

      A TI-86 can query 60k rows without breaking a sweat.

      If his hard drive overheated from that, he is doing something very wrong, very unhygienic, or both.

    • socsa@piefed.social · 7 points · 7 hours ago

      I’ve run searches over 60k lines of raw JSON on a 2015 MacBook Air without any problems.
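
      For scale, here’s a toy version of that kind of search in Python. The 60k JSON lines are generated on the fly and the "agency" field is invented, so treat it as a sketch rather than a benchmark:

      ```python
      # Generate 60k lines of raw JSON, then parse and filter them.
      import json
      import time

      lines = [
          json.dumps({"id": i, "agency": "Treasury" if i % 7 == 0 else "Other", "amount": i * 1.5})
          for i in range(60_000)
      ]

      start = time.perf_counter()
      matches = [rec for rec in (json.loads(line) for line in lines) if rec["agency"] == "Treasury"]
      print(f"{len(matches)} matches in {time.perf_counter() - start:.3f}s")
      ```

      Even parsing every single line, a scan like that should finish in well under a second on decade-old hardware.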

    • arotrios@lemmy.world · 7 points · 8 hours ago

      Seriously - I can parse multiple tables of 5+ million rows each… in EXCEL… on a 10-year-old desktop without the fan even speeding up. Even the legacy Access database I work with handles multiple million+ row tables better than that.

      Sounds like the kid was running his AI hamsters too hard and they died of exhaustion.

        • arotrios@lemmy.world · 3 points · 6 hours ago

          You’re correct - the standard worksheet tabs can only hold 1,048,576 rows, a bit over a million.

          The way to get around that limitation is to use the Data Model within Power Pivot.

          It can accept all of the data connections a standard Power Query can (ODBC, SharePoint, Access, etc.).

          You build the connection in Power Pivot to your big tables and it will pull in a preview. If needed, you can build relationships between tables with the Relationship Manager. You can also use DAX to build formulas, much as you would in a regular Excel tab. You can then run Pivot Tables and charts against the Data Model to pull out the subsets of data you want to look at.

          The load times are pretty decent - usually it takes 2-3 minutes to pull a table of 4 million rows from an SQL database over ODBC, but your results may vary depending on the data source. It can get memory-intensive, so I recommend a machine with a decent amount of RAM if you’re going to build anything for professional use.
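
          If you’d rather script the same workflow, a rough Python analogue (not Power Pivot itself) is to pull the table over ODBC with pandas and pivot it in memory. The DSN, table, and column names below are placeholders:

          ```python
          # Pull a large table over ODBC, then aggregate it - roughly what the
          # Data Model + Pivot Table combination does inside Excel.
          import pandas as pd
          import pyodbc

          conn = pyodbc.connect("DSN=warehouse")  # hypothetical ODBC data source name
          df = pd.read_sql("SELECT region, product, amount FROM sales", conn)

          summary = df.pivot_table(index="region", columns="product",
                                   values="amount", aggfunc="sum")
          print(summary)
          ```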

          The nice thing about building it out this way (as opposed to using independent Power Queries to pull out your data subsets) is that it’s a one-button refresh, with most of the logic and formulas hidden within the Data Model, so it’s a nice way to build reports for end users that are harder for them to fuck up by deleting a formula or hiding a column.

          • driving_crooner@lemmy.eco.br · 1 point · 5 hours ago

            Oh yes, I remember using Power Query for a few months once I started working with bigger databases, but I saw that moving to Python would be better career-wise and never came back to Excel for actual work (though in the end everything gets exported to Excel anyway).
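
            That last export step really is a one-liner with pandas (the file and sheet names here are just examples, and it needs openpyxl installed):

            ```python
            import pandas as pd

            # Whatever the pipeline produced, dumped to a spreadsheet for the end users.
            df = pd.DataFrame({"account": ["A", "B", "C"], "total": [120.5, 98.0, 45.25]})
            df.to_excel("report.xlsx", sheet_name="summary", index=False)
            ```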

    • IsoKiero@sopuli.xyz · 10 up / 1 down · edited · 9 hours ago

      Don’t know what Elmo’s minions are doing, but I’ve written code at least as inefficient. It was quite a few years ago (the code was written in Perl) and I’d like to think that I’m better now (but I’m not paid to code anymore). The task was to pull in data from a CSV (or something like that; as I mentioned, it’s been a while) and convert it to XML (or something similar).

      The idea behind my code was that you could just configure which fields you want from arbitrary source data and where to place them in whatever supported destination format. I still think the basic idea behind that project is pretty neat: throw in whatever you happen to have and get something completely different out the other end. And it worked as it should. It was just stupidly hungry for memory. 20k entries would eat up several gigabytes of RAM on a workstation (and back then it was a premium to have even 16G around), and it was also freaking slow to run (like 0.2-0.5 seconds per entry).

      But even then I didn’t need to tweet that my hard drive was overheating. I understood perfectly well that my code was just bad, and I even improved it a bit here and there, but it was still very slow and used ridiculous amounts of RAM. The project was pretty neat, and when you had a few hundred items to process at a time it was even pretty good; there were companies that relied on that code and paid for support. It just totally broke down with even slightly bigger datasets.

      But, as I already mentioned, my hard drive didn’t overheat on that load.
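
      For the record, a rough Python sketch of that configure-a-field-mapping idea (the field names and the mapping are invented for illustration, not my original Perl) - streaming row by row is what would have kept the memory use flat:

      ```python
      # Map configured CSV columns onto XML paths, one row at a time,
      # so memory use stays constant no matter how big the input is.
      import csv
      import xml.etree.ElementTree as ET

      FIELD_MAP = {"name": "fullName", "email": "contact/email"}  # csv column -> xml path

      def convert(csv_path, xml_path):
          with open(csv_path, newline="", encoding="utf-8") as src, \
               open(xml_path, "w", encoding="utf-8") as dst:
              dst.write("<entries>\n")
              for row in csv.DictReader(src):  # only one row held in memory
                  entry = ET.Element("entry")
                  for column, path in FIELD_MAP.items():
                      node = entry
                      for part in path.split("/"):  # create nested elements on demand
                          child = node.find(part)
                          if child is None:
                              child = ET.SubElement(node, part)
                          node = child
                      node.text = row.get(column, "")
                  dst.write(ET.tostring(entry, encoding="unicode") + "\n")
              dst.write("</entries>\n")
      ```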

    • vga@sopuli.xyz · 1 point · edited · 8 hours ago

      I mean, if we were to sort of steelman this thing, there sure can be database relations and queries that hit only 60k rows but are still heavy as fuck.
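
      As a contrived illustration (SQLite, with a made-up table and columns): a correlated subquery with no index on the correlated column turns a 60k-row table into roughly 60k × 60k row visits.

      ```python
      import sqlite3
      import time

      con = sqlite3.connect(":memory:")
      con.execute("PRAGMA automatic_index = OFF")  # stop SQLite from rescuing us with an auto index
      con.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
      con.executemany(
          "INSERT INTO payments (account, amount) VALUES (?, ?)",
          ((f"acct{i % 500}", float(i % 997)) for i in range(60_000)),
      )

      start = time.perf_counter()
      # Rows above the average for their account. Without an index on `account`,
      # the subquery rescans all 60k rows for every outer row it evaluates.
      # The LIMIT keeps the demo short; drop it and the whole thing goes quadratic.
      rows = con.execute("""
          SELECT id FROM payments p
          WHERE amount > (SELECT AVG(amount) FROM payments q WHERE q.account = p.account)
          LIMIT 100
      """).fetchall()
      print(len(rows), "rows in", round(time.perf_counter() - start, 2), "s")
      ```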