Google scientists have modelled a fragment of the human brain at nanoscale resolution, revealing cells with previously undiscovered features.

  • MonkderDritte@feddit.de · 6 months ago

    then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D.

    Why AI for that?

    • Gamma@beehaw.orgOP · 6 months ago

      ML is pretty common when working with a ton of data. From another article (there's a rough sketch of what the stitching step means at the end of this comment):

      To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”

      Unfortunately techbros have poisoned the term AI 🥲

      Source: Google helped make an exquisitely detailed map of a tiny piece of the human brain
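
      For anyone wondering what "stitch the slices back together" actually involves, here's a deliberately tiny Python sketch. It is not the real Google/Harvard pipeline (that relies on learned models and petabyte-scale infrastructure); it only illustrates the basic alignment idea using a classical technique, phase cross-correlation from scikit-image. The `stack_slices` helper and the fake data are made up for the example.

      ```python
      # Toy illustration only: real connectomics pipelines use learned models and
      # massive infrastructure. This just shows the core idea of "stitching"
      # adjacent electron-microscope slices: estimate how each slice is shifted
      # relative to the previous one, undo that shift, and stack into a 3D volume.
      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      def stack_slices(slices):
          """Align each 2D slice to the previous one and stack into a 3D volume.

          `slices` is assumed to be a list of same-shaped 2D grayscale arrays,
          ordered along the cutting axis.
          """
          aligned = [slices[0].astype(float)]
          for current in slices[1:]:
              # Estimate the (row, col) translation between neighbouring slices.
              offset, _, _ = phase_cross_correlation(aligned[-1], current.astype(float))
              # Apply the estimated shift so structures line up across slices.
              aligned.append(nd_shift(current.astype(float), offset))
          return np.stack(aligned)  # shape: (num_slices, height, width)

      # Tiny fake example: three "slices", the later ones shifted copies of the first.
      rng = np.random.default_rng(0)
      base = rng.random((64, 64))
      fake_slices = [base, nd_shift(base, (2, -1)), nd_shift(base, (4, -2))]
      volume = stack_slices(fake_slices)
      print(volume.shape)  # (3, 64, 64)
      ```

      The reason the real project leans on ML rather than something like this is scale and messiness: per the quoted article, the model also has to label features across 1.4 PB of imagery, not just line the slices up.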