Change, Technically

Open science: hope is other people

Season 1 Episode 5

Much like open source software, open science is a path to distributed collaboration. By making the data from experiments and investigations open and available, scientists can multiply impact and discovery for teams they've never even met.

Our guest, Saskia de Vries, talks to us about her work at the Allen Institute, including accelerating the pace of discovery by making scientific data available to everyone who wants it.

Credits
Saskia de Vries, guest
Ashley Juavinett, host + producer
Cat Hicks, host + producer
Danilo Campos, producer + editor

You can learn more about the Allen Institute on their website: https://alleninstitute.org/

Read some of Saskia's recent thoughts on sharing data in neuroscience here: https://elifesciences.org/articles/85550

The CRCNS open data repository that Saskia mentions: https://crcns.org/

Read about the FAIR principles for scientific data management and stewardship: https://www.nature.com/articles/sdata201618 

Learn more about Ashley:


Learn more about Cat:

Saskia:

I like to think about open science as really being a form of collaboration. Sometimes it's a collaboration where we are actively collaborating with each other. Sometimes it's a collaboration where we're building off of one another's work without having a conversation about it. I think we're at a point where it's really unlikely that the person who's driving the forefront of data acquisition and experimental techniques is going to be the same person who's able to drive the forefront of the analytical and mathematical techniques.

Ashley:

This is Saskia de Vries. Saskia is the associate director of data and outreach at the Allen Institute and is overall just an amazing scientist. She thinks about open science, and there are a lot of parallels here for open source software, I think, too.

Cat:

I've loved listening to this because it made me think about the future of human knowledge, and about the stereotypes and beliefs that are holding us back, like the idea that science just comes from one person locked in a lab. Really, this is a beacon of light and hope at this moment, about thinking of our collective human effort that can compound on itself. So, so excited to talk to Saskia.

Saskia:

My academic journey started off as kind of a typical academic trajectory. My undergraduate work was in molecular biophysics and biochemistry, and I spent a couple of years as a tech in a lab. I did a PhD in neuroscience, and, I don't know, it felt at the time like just a normal PhD, and in retrospect it also feels like a normal PhD. I was in a lab, I had a project. Everybody's project was pretty distinct. We were friendly, we communicated with each other, but it wasn't necessarily a collaborative process. And then I went and did a postdoc. All of the work that I did post-college was in the visual system, so I'm really interested in and excited about how the outside world gets into our brain.

I kind of quickly realized I didn't want my PI's job. He would contest this, but I didn't feel like he got to do science, right? I felt like he was running a small business, and if I wanted to run a small business, I should have gone to business school. And I actually feel like this is one of the downsides of academia: the skills you need to run a lab are not the skills you train in graduate school and as a postdoc.

Yeah, so the Allen Institute, where I work, is a nonprofit research institute that was founded in 2003 by Paul Allen. When it first started, it was the Allen Institute for Brain Science, and it was focused on creating a brain atlas, first of the mouse, and then there was also a human atlas and a macaque atlas, I believe. The part I was involved with is called the Allen Brain Observatory, which was doing in vivo physiology: recording the activity of cells in the brain while the mouse is watching visual stimuli or performing behavioral tasks. Since then the Institute has grown; there's now an Allen Institute for Cell Science, an Allen Institute for Immunology, and an Allen Institute for Neural Dynamics, which is where I currently work. But one of the key things about the Institute is that its core principles are big science, team science, and open science. It made sense to me how that worked for things like an atlas, where you spend a lot of time creating this resource, and now it's a resource that anybody can access and use. At this stage, all these things are digital, but you could imagine an atlas being a book that you just share with somebody, and they can peruse it and figure things out and find coordinates for things.

Ashley:

I remember this from grad school. Yeah, exactly. It's this big, like, foot-by-two-foot book, basically. And you open it up when you're cutting up your mouse brain and you want to know where you are, and you find the page that looks like the thing you're looking at under the microscope. Yeah. So it's actually very physical, yeah.

Saskia:

I came to the Institute doing the physiology; that was my background, so recording from cells in the mouse brain. And then, yeah, how do we share this with the world? How do we make this a resource that other people can use, can interact with and engage with and use to do science, right? I think at the time we had an institute motto of accelerating discoveries. And, like, yeah, what does that look like? It quickly became clear to me that this looks really different in the physiology space than it does when you're creating an atlas or a kind of compendium of tools or things like that. And so, based off of my early work in the Brain Observatory, for a while I was leading a research group here, but then, a few years ago, I moved into a role where I'm really just focused on how we share this data and how we make it most impactful and most accessible for people to reuse: what tools do we build around it, how do we organize it, how do we document it, kind of all of those questions.

Ashley:

I love that. And I'm one of those scientists who has benefited tremendously from this, because before you actually go do an experiment, you might look at the Allen Institute data and see, okay, did they do it? What did they find? What can we build off of? It's such an amazing resource. One thing I'm curious about: you said the goal is accelerating discovery, and obviously, for humankind, this is a very important thing, like, we want this. So what are the things you feel like the Allen Institute is trying to do that really get us closer to, you know, whatever that discovery is?

Saskia:

I think there are a couple of aspects to this, and one place I'll maybe emphasize is that when we talk about open science in general, a lot of times those conversations end up being about data sharing, about how we put data out into the world, which obviously is a major component of open science. But if there's no data reuse happening, it kind of doesn't matter. And so the thing that I put a lot of my thought and attention into is how do we facilitate the reuse? Because the ultimate goal here, right, as you were just posing the question, is: what does it look like to accelerate discovery? It's only going to do something if people are able to take the things that we're doing and use them to drive their own questions and their own answers.

And the example you gave is one I really like: I have an experiment I want to do, I have a question I want to ask. Can I get some of that information, some of that work, out of stuff that already exists? Instead of having to do a hundred percent of my experiment, if I can constrain the variables of my experiment based off of previous work, I only need funding, time, and resources to do the remainder. I forget what percentage I put out there, but if you can get 20 percent from existing data, you only have to do 80 percent. And so for me, that's one of the ideal examples of accelerating work, and I think this is what the brain atlas has done for so many people. It tells them: these are the genes that are expressed in your region, or these are the regions where the gene that you care about is expressed. Go look in those places. It really focuses people's attention and allows their experiments to be better designed, better constrained, and to fit better into existing data and existing knowledge.

Ashley:

That 20 percent might have taken someone individually, like, two years or something; that's all the validation experiments. Like, you know, we talked about dopamine in a previous episode, so I'm going to come back to dopamine. Let's say you just want to know where dopamine neurons in the brain go, who they talk to. Your first step might be to look at the Allen atlas, look at where the dopamine neurons are and where they go. Then you could start there, and you could say, okay, we've got X, Y, and Z brain regions, let's dig in there and see. Yeah. But that would take probably years of someone's PhD.

Cat:

Something that comes to my mind as I listen to this great chat between scientists, from outside the field, is that I've spent a lot of time thinking about things like who gets credit for work, and why is it hard to collaborate? Why is it hard to share? And the beautiful team science, open science stuff feels deeply vital, deeply exciting, but I'm really curious: do you see barriers to people's adoption of this way of working? In particular, I'm wondering about things like, does it feel more real to people if they did the experiments in their own labs? Is there kind of a conflict about ownership? I don't even know what my question is here, but I know that there are struggles getting people to adopt these ways of working, right?

Saskia:

There are like three different ideas I want to touch on here, but I think the first one is: how do we get people to adopt this? There's the data sharing side, and this is where so much of the energy is; the BRAIN Initiative and the NIH are putting requirements on people to share their data. And it's interesting, because in the conversations I have with people about how best to share data, when to share data, why to share data, it used to be that people had strong reasons against it. Often it was: there are more things I want to do with it. Now, the rebuttal I hear most often from people is: nobody's really interested in my data, right? Like, it's a lot of work for me to share my data, and...

Cat:

Impostor syndrome.

Saskia:

...nobody's going to use it. Well, I don't know that it's impostor syndrome. I think often it's: I've done this really narrowly constrained experiment that is really unique to a really specific thing; I don't see what somebody's going to do with it. The reality is, if you look at the reuse of data, so I spent some time looking at reuse in CRCNS, Collaborative Research in Computational Neuroscience, a repository that the NSF funded, and only about 11 percent of the data sets in that repository have ever been reused, to our knowledge, based off of publications. That's not a perfect record, but you do get this long-tail distribution. And so if you really are just thinking, I'm going to be out on that long tail, is it worth my energy to put it out there? I think that's a valid concern. And so that's why I push a lot on making it easier for people to reuse data, and on sharing and documenting data in a way that my ability to reuse it isn't constrained to the questions that you asked when you first collected it. And so that's one of the other really important things: data needs rich metadata. And I'm going to try and avoid spending the next hour just talking about metadata, but...

Cat:

We have a pretty technical audience, so you probably could.

Ashley:

We love getting meta on this.

Cat:

Rich metadata could be a tag for this show.

Saskia:

Well, I don't know if you know the FAIR standards, right? These are kind of the standards for open science: findable, accessible, interoperable, and reusable. And if you read the documentation about FAIR, everything is about rich metadata; all of these principles actually come down to having rich metadata. And then when you look at the metadata that exists for most data, it's very anemic for really understanding it. But yeah, being able to reuse data in ways that are different from how it was originally thought of is kind of that crucial piece. So part of it is sharing data in a way that facilitates that. And part of it, I think, is training scientists; it's a new muscle, right? When I was a graduate student, I was taught to think about what's the next experiment you do to answer this question, and I think there's a training piece of: how do I look for data that might let me answer this question, that 20, 30, 40, 50, 100 percent?
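
To make the "rich versus anemic" contrast concrete, here is a minimal sketch of what session-level metadata might look like. The field names and values are purely illustrative and are not drawn from the Allen Institute's actual schema or any particular standard.

```python
# Hypothetical example only: field names and values are illustrative, not a real schema.

# "Anemic" metadata: enough for the original analysis, not for anyone else's question.
anemic = {
    "file": "session_042.dat",
    "subject": "mouse_7",
}

# "Rich" metadata: enough context that someone with a different question can reuse the data.
rich = {
    "file": "session_042.dat",
    "subject": {"species": "Mus musculus", "genotype": "wild-type", "age_days": 90, "sex": "M"},
    "modality": "extracellular electrophysiology",
    "targeted_structure": "primary visual cortex",
    "stimulus": {"type": "drifting gratings", "directions_deg": [0, 45, 90, 135], "contrast": 0.8},
    "session_date": "2024-05-14",
    "experimenter": "A. Researcher",
    "processing": ["spike sorting", "quality-control thresholds applied"],
}

print(sorted(rich.keys()))
```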

Ashley:

Like, how do you see: okay, I collected X, Y, and Z, and maybe I have a set of questions, but then see the field more broadly and see the possibility in your own data to address other questions in science.

Saskia:

Exactly. Yeah.

Cat:

I think it's beautiful. It's really exciting to imagine that your one niche project could also be the foundation for someone else, if we could get into that coalitional mindset about it. When I think about my own research lab and the experiments that we run, we actually do try to think a lot about, could we be continuously gathering data on a measure that remains important no matter what other things we're studying, so that we get longitudinal insight and we start to repeat it? We have a lot of that work kind of going on in the background of every specific project,

Saskia:

Mm hmm.

Cat:

You know, that kind of relies on us just being diligent about that. But I always tell my team: reduce, reuse, recycle. I'm a big fan of that; all this intellectual labor that we're spending in the world is so precious, so try to make it go as far as you can make it go.

Saskia:

Yeah. Absolutely.

Ashley:

Yeah. So that's big thing number one: getting people to think about reuse at all, either of their own data or to address questions that they have using other people's data.

Saskia:

I like to think about open science as really being a form of collaboration. Sometimes it's a collaboration where we are actively collaborating with each other, and sometimes it's a collaboration where we're building off of one another's work without necessarily ever having a conversation about it. And I think that neuroscience, particularly, has really become a big data field in the last decade plus, for a number of reasons, a big one being the BRAIN Initiative: the NIH put a lot of money into neuroscience research, and a lot of that money went into developing and improving our technologies for collecting and recording data. So we now have this explosion of data.

One of the things that I often tell scientists and students is that in the experiments I was doing as a graduate student, I was recording small populations of neurons, probably 30 to 50 neurons at a time, which was kind of cutting edge 20 years ago and now is not, right? So great, we can record thousands of neurons at a time. But the other side of that equation is that the analysis I did was pairwise correlations between these sets of 30 neurons, and that doesn't scale to thousands of neurons. So not only have data collection techniques changed drastically in the last 20 years, our analysis techniques have changed drastically too, and I think we're at a point where it's really unlikely that the person who's driving the forefront of data acquisition and experimental techniques is going to be the same person who's able to drive the forefront of the analytical and mathematical techniques.

And so we have to collaborate in order to really capitalize on all of the amazing data we're able to collect, because maybe I can collect this huge data set and do some really light surface analysis of it, but there's so much more there that my personal mathematical skills can't scratch the surface of. I have to be talking to a mathematician, a theoretical scientist, a statistician, in order to really capitalize on these data. And I think this is true writ large of our field right now: our molecular techniques, our recording techniques, our analysis techniques are all in such different domains of expertise that if we're not collaborating, both directly and indirectly, we're just wasting our time, really. So our field has to be set up to enable that. Sometimes I talk about how the tools we build for open science are ultimately tools for collaboration, and we're just extending that collaboration to everybody. I could shut the doors and just make it for my best friends, but I still need to be able to move my data from my computer to your computer, you still need to be able to open the file and understand what's in it, and you have to know what the experiment was. And so if I'm doing that for my four best friends and I just open the door, that's open science.
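
For a rough sense of why the pairwise approach stops scaling: the number of neuron pairs grows roughly with the square of the population size. The population sizes below are only illustrative round numbers, not figures from the episode.

```python
from math import comb

# Number of pairwise correlations for a population of n simultaneously recorded neurons.
for n in (30, 50, 1_000, 10_000):
    print(f"{n:>6} neurons -> {comb(n, 2):>12,} pairwise correlations")

# 30 neurons give 435 pairs, which one person can reasonably inspect by hand;
# 10,000 neurons give about 50 million pairs, which calls for very different
# analytical and statistical machinery.
```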

Ashley:

Yeah, I love that. I love thinking about it as collaboration, and, as you said, as tools for collaboration. And I'm super curious: you mentioned we need different skill sets when we're thinking about how to do that collaboration. So, in your time doing this and working with scientists who are using the data, what are those skill sets? What do they look like? What do people need to be able to do in order to really tap into the possibility of all of this open data?

Saskia:

I mean, it runs the gamut, right? Some of them are really technical. It's funny, because I'm now in a position where I work with a lot of software engineers; I'm on our scientific computing team, and I am not a software engineer, right? My ability to use GitHub is very minimal. Like, I use it, don't get me wrong, but I'm not good at it. If something gets out of sync, I'm just like, somebody else come fix it.

Cat:

Every software engineer I interview starts the interview with the same thing: I'm not as good as everybody else. Just saying, you might be a developer in some rooms.

Saskia:

No, that's a fair point; I work with people who are a lot better at it than I am. Some of it becomes technical. But I mean, the biggest thing is communicating, and the tools of communicating look different for people with different backgrounds and different expertise. We can come back to code as an example: there's some code that I can read and make sense of, and some code that I can't make sense of, and if you don't have documentation that goes with it, I'm never going to figure out what that software does. So how we document our code, how we build the examples around it, and whether we build tools that make it so you don't have to understand all of that. I have someone on my team who's working with language models so that scientists don't have to learn how to do MongoDB queries or SQL queries to find data through all of our metadata, and can ask questions just through a sentence. There are challenges to that; it's not a trivial thing to do. But the more we can remove that barrier of having to learn yet another language or tool or package to be able to work with these things, I think the better that is.
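
As a sketch of the gap that language-model layer is meant to bridge, here is the kind of MongoDB-style filter a scientist might otherwise have to write by hand, next to the plain-English question it encodes. The field names and values are hypothetical, not the Allen Institute's actual metadata schema.

```python
# Hypothetical example: field names and values are illustrative, not a real schema.

question = "Which sessions used two-photon imaging in visual cortex after January 2023?"

# The structured query a scientist would otherwise need to learn to write;
# a natural-language interface would generate something like this from the question above.
mongo_style_filter = {
    "modality": "two-photon calcium imaging",
    "targeted_structure": {"$regex": "^VIS"},   # visual areas, e.g. VISp, VISl
    "session_date": {"$gte": "2023-01-01"},
}

print(question)
print(mongo_style_filter)
```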

Ashley:

Yeah, I love that at the end of the day it's communication, and that's just what we need to get better at. Whether that's in the form of writing metadata that's detailed enough that someone knows what you did in the experiment, or documenting your code, these are all just forms of communication, and that is the bedrock of collaboration, right? It doesn't happen without it. I love that. Yeah.

Cat:

The idea that not everybody can do everything is such a powerful one. And also maybe seeing your place in science as part of the market forces: at the beginning of this, you were talking about not wanting your PI's job of running a small business, but I hear a lot of macroeconomic commentary in that. I was going to say, it was funny, because I founded a startup and founded a research lab, in sequence, and I actually felt like there were really interesting skills I learned from getting money in both places for very different things. I think that's a really interesting call, because sometimes as a scientist you get trained to kind of think you're isolated from society, that you're just in your lab alone and you do everything from beginning to end, and that that's what makes you a great scientist. And you're just painting a picture for me of a very different reality.

Saskia:

I kind of think I did think that way. You know, my father was a history professor, and he very much sat in his office and wrote papers and wrote books, and did a lot of really interesting stuff, actually. But it really was just that, and I've kind of teased him a little bit about it: well, why these questions, right? What's the benefit of asking these questions? He never gets asked that as a historian. You don't have to justify why you're asking about the price of bread in the modern Dutch economy. I'm sure somebody at some point asked him that, but he never had to justify it, right? He never had to go to the NIH and say, give me money to do this, in the way that we have to justify every experiment that we do. Which is not necessarily a bad thing; we should think carefully about why we're asking the questions we're asking, why we're doing the experiments we're doing. But in the sciences, you're constantly justifying the things that you're doing. And I kind of just want to understand how we see the world, right? I just want to sit in a lab and do experiments and get excited and have these little aha moments. And it was one of those things where you just don't get to do it that way. It's really hard to do science in a way that's just about that pure exploration; you always need the funding and you need the justification. And so I think that was a little bit part of it: it is always part of a bigger effort, and figuring out how to better fit it into that bigger effort, I think, was one of the things that pushed me in this direction.

Ashley:

I think that's a really interesting point. I feel like writing and thinking through that justification as a single person is probably easier than doing it on a team, but the benefit of doing it on a team is that you get everyone's input and you're all bought in, hopefully, on the end result and the end thing you're striving for. I know this is something the Allen does, right? You're engaging in these massive projects and you have multiple teams on board. So how does that work, and what do you see changing in people's mindsets as they go through this process, moving from being single individual contributors to being a piece of the team? What does that shift look like for people?

Saskia:

Yeah, it can look like a lot of different things. I think the thing that comes first is seeing that the possibilities open up really quickly, because you can collaborate with people with different skill sets. It becomes a lot easier to add a rich layer of histology on top of your physiology, even though you're terrible at histology. There are also challenges, though, and I think you touched on this earlier: how do we assign credit when we're collaborating like this? That definitely is a challenging piece, how we actually recognize who's contributing in what ways and how we communicate that to the broader world, because it ends up being important for people's careers, right? And paper authorship isn't a great way of doing it. You can put as many stars next to names as you want, but it really often comes down to the first author being the person who gets the most credit. I don't know that we've really solved this. I think it's something we continue to grapple with and think about; internally, how do we recognize people's contributions, how do we reward them, and then how do we communicate them is kind of always an ongoing thing. I do think it's possible that some of the infrastructure for open science could help in this regard, if papers stop being the only currency that we have. Being able to give credit, say, in our metadata, where now it's not just how many papers have my name on them, but how many data assets my name is attached to, and what happens with those data assets, maybe that becomes another piece,

Cat:

There are so many parallel conversations that I've heard, and that I think about, in open source software. There's so much conversation about what contributions get recognized, even things like who manages a project: maybe they didn't write the lines of code, but they do a ton of work to facilitate the interactions between people that made something important happen. And then that becomes part of the cultural history of something, but not always part of the explicit credit. I think protecting that credit, and kind of fighting for it, even for our friends and colleagues who do that kind of work, really matters a lot. I also think about things like, a PI in my grad school told me, you don't cite anything like a software package, you don't put that in your paper references. That was just what they thought back then. And then I started a lab and we started working with software developers, and I realized I have to cite all of these developers who wrote the stats package that I used. I almost felt this shame, this sadness, that I had not seen them as people and had kind of been trained to not even see it as anything but a tool. And it just happened to be free. I don't know why it was free; it just was, you know,

Saskia:

Yeah,

Cat:

so I went on that journey myself,

Saskia:

Even at the point where you do cite these things, there's also this weird thing where I can cite a data set, I can cite a software package, and then I can cite a random paper that I mentioned in one line of my paper, right? But ultimately my paper is built off of the software and the data from those first two citations, and yet the one-line mention gets just as much credit as those. Like, I cite Hubel and Wiesel in every paper I ever write. They don't need any more credit for anything, right? It's just that we have to: we're putting an electrode into a brain in the visual system, so I have to cite Hubel and Wiesel, but intellectually it's doing almost nothing,

Cat:

I'm imagining this: you have your acknowledgements and citations, and you're like, okay, but these are my OGs. These are my real ride-or-die citations that really shaped me, and these are the ones I have to mention. It feels like we'd get into a lot of drama if we started weighting citations as well.

Ashley:

At the Allen Institute, there are teams of software engineers and developers that build a lot of the tools. And I'm sort of curious, thinking about their involvement in the scientific process, what does that look like at the Allen?

Saskia:

They're often involved in the science when it comes down to: okay, these are the experiments we're going to do; what does that look like, what requirements does that put on our data, what systems exist, what systems need to be built? I work on a team with a lot of software engineers, and we do try to get them to engage with scientists a lot, because I think it helps them understand why we're building the tools we're building when they can see how scientists are using them. So we bring them into scientific talks; we've got seminar series and things like that, but we'll also sometimes have scientists just come talk to the software engineers and give a slightly more lay explanation of the work they're doing. And when I have new people on my team, we'll often set up a bunch of these one-on-one conversations so that they understand the context, or maybe not fully understand it, but they get to see a picture of that context quickly. It's rare that software engineers are the ones saying, hey, here's the next idea we should try, but they're not too far from those conversations. And then it's also kind of a personality thing, right? Some software engineers are more excited about that and maybe sit in on more of those conversations.

Ashley:

I love that that's also just another piece of communication that the scientists then need to learn how to do, right? How do you break down the science, bring in the people who are going to build out the technical backend to make this happen, and inform them enough that they can appreciate what the goals are and know what to build at the end of the day. Yeah.

Saskia:

I think, having been on kind of both sides of it, I had this idea, and I see this in other scientists, of, well, they can build anything for me, right? I just have to tell them what I'm doing and they're going to make it all happen. And it's just like, oh, that's just not true. Or maybe it is true, but the time horizon of it is longer than what we can afford. So being able to understand the needs and limitations on both sides,

Cat:

I get it on the other side, because developers are like, well, you're a scientist, so can you just please tell me how to be happy at work and how to fix my company? So we both think the other side is magical, in a way that I think we could lean into and say, well, it comes from love, you know, but let's find a way to connect.

Ashley:

Everyone's like, just fix it already.

Cat:

Just give me the answer. Give me the thing.

Saskia:

One of the leads on the technology side used to say something that became so dear to me. Whenever people would say, you just do this, just do that, she'd say: "just" is a four-letter word. I want this embroidered on the wall of my office, because that was an eye-opening thing for me to learn. Yeah, we do think you can just do this, just do that.

Ashley:

All this stuff takes effort. It takes time. It takes communication. You know, we have a lot of listeners who are in tech, who maybe have never done histology or physiology or touched a mouse or a microscope. But you are creating all of these databases and lots of tools, and you probably need help, even on the open science and open software development side of things. So for the curious listener who's wondering, how do I do a little bit of neuroscience or help out in some way, what would you tell them?

Saskia:

There is so much open neuroscience that people can tap into. I think the challenge is finding it, finding what it is you can do within that space. There's a lot of open data, and there's a variety of different repositories based on different data modalities. If you're interested in physiology and behavior data, there's a repository called DANDI, which is supported by the BRAIN Initiative and is where a lot of animal physiology and behavior data goes. There's also MRI data for humans on OpenNeuro. There are a lot of places where you can find data. But that's one thing: you can dive into data and think about analysis, but you might need some hooks that help you get into it and help you think about what types of analysis are interesting to do.

I think the other place, though, is on the software and tooling side. That might take more sleuthing, but I can tell you there's a lot of open source software that gets used to process and analyze data, much of it developed by neuroscientists who find themselves having to figure out how to deal with some type of data: I've collected all of these signals out of the brain, and now I want to sort individual units out of all of these signals. As a field, we've developed a number of great tools, but I don't know that we're done. I think a lot of these projects could benefit from people who are better software engineers, making them just better tools that are easier to use. But also people with, say, signal processing skills, engineers who maybe don't care about the context of where the signals come from but can hone in on algorithms for detection and things like that. So there's a lot of open source tooling that the field largely relies on that I think would welcome people to engage with it and work on it. But like I said, it's a little bit about finding it, and some of that is that you have to tap into some part of the conversation, be it the literature, or social media, or, I don't know, talking to some scientists face to face. At some point you have to get into that conversation.

Cat:

People are feeling down about science right now. Science is a scary thing to do out in the open for some of us; I work on social topics, and it's tense in tech. So I think you're in a really interesting place. The Allen, to me, sometimes is this beacon of hope: hey, we can actually have big, ambitious projects and a big institution that occupies this different part of society and bridges gaps. So I'll just throw this general question out to you: what advice do you give us about maintaining hope, believing in science, supporting science right now?

Saskia:

I don't know that I'm the right person to answer this question. I guess I'll maybe come back to some of the stuff we've been saying: for me, hope always comes down to other people. That's the only place I've ever been able to find hope, and maybe this is why I gravitate to team science in the way that I do. My hope in science is only because there are other scientists here who are in it with me, and who share this desire to make it more collaborative and more open and more inclusive. At the same time, I'm also very scared. I don't know what our future looks like. I think there are a lot of things happening that could have a really big impact on everything that we're doing. And so I get very nervous, but I at least know that I'm not facing it alone.

Ashley:

Oh, I love that.

Cat:

You're not alone. We're with you.

Ashley:

I think that's a beautiful note to end on, actually: hope is other people.
