Fishy pasta sauce

We are fortunate to be sheltering in place with an old friend and Clark Way Harrison visiting professor from Italy, so instead of adding to the excellent advice about Zoom and other tricks of distance teaching, I will be sharing recipes and philosophies of cooking. Patrizia d’Ettorre is from Abruzzo, from a small town right on the sea. She went to university in Parma, has lived in Denmark, and now is a professor in Paris. Her cooking is expert and simple. Her recipes bring out the best from excellent ingredients, the best we have here.

Last night we had a fabulous fresh trout, obtained in a pre-ordered bundle from Daniel Roth at the University City farmer’s market. With the cup or so of remaining cooked trout she made this sauce. It could be made with any fish. It could have tomato sauce added, also. But I like its simplicity. Italian food is as much about aroma as about taste.

I prefer to cook by weight. Patrizia mostly does it by feel. I will try to give volume equivalents, but with this dish it doesn’t matter. You could double any ingredient.

Trout pasta sauce with black olives

180 g (about a cup) leftover fish, or an 8 oz can of tuna, mackerel, or salmon

30 g (quarter cup) chopped black olives. Or capers. I thought this would overwhelm the fish, but it didn’t.

1 shallot or a quarter of a small onion, maybe 2 tablespoons chopped

juice of one lemon

1 tablespoon chopped parsley

454 g (one pound or one usual box) pasta. We used spaghetti.

olive oil

To cook:

Bring a pot of water to boil. Salt the water with about a tablespoon of Kosher salt.

Sauté the shallot lightly in 2 tablespoons of olive oil (we use the current harvest of Tuscan oil we get at Costco).

Add the fish and the olives and sauté for a couple more minutes on low.

Cook the pasta to just before al dente. I usually do 2 minutes less than the box says.

Drain the pasta, saving a cup of the cooking water in case you need it to moisten the prepared pasta.

Add the lemon juice to the fish mixture and mix into the pasta. Sprinkle parsley on top. Add a little olive oil and some of the cooking water if necessary to moisten it to your liking.

Enjoy, and remember Italians never put cheese of any kind on pasta with fish.


Best updates on the new coronavirus (#coronavirus, #covid19, #SARSCoV2) are on Twitter: a brief tutorial

Want to see a phylogeny of #covid19? Follow @nextstrain. Want to hear what a Harvard epidemiologist thinks? Follow @mlipsitch. Want a thoughtful microbiologist’s perspective? Follow @RELenski. Want to know the latest advice from the World Health Organization? Follow @WHO. Want more general advice? Search various hashtags on Twitter, like #coronaoutbreak, #coronavirus, #SARSCoV2, or #covid19.

There are lots of other scientists, news reporters, and health experts posting information that goes far past 2 weeks of food and hand washing. It shows up first on Twitter. I’ll leave it to you to choose whom you would like to follow.

The truth is that for some time Twitter has been where I hear scientific news first. It is quick, interesting, and timely. How can that be when so many disparage Twitter? It is all about whom you follow. I keep Twitter for science, the environment, education, and the like.  Facebook is for personal stuff, more banal.

So load the Twitter app onto your phone and start following people who post cool science. Start with the four I mentioned, and then it is easy to follow others from their lists. You don’t have to post anything yourself if you don’t want to.


One important way to be fair in grad school interviews

Today is the day! We are interviewing possible graduate students! They have made it through a huge cut and have been invited to campus. Which ones will we commit to for 5 or more years? Which ones will join our labs and have their lives changed by our research and our culture? How can we choose the best students? One thing that is crucial is to be fair. Do not look for your younger twin. Treat everyone equally. One way to do this is to ask the same questions of everyone. Don’t favor someone who shares your arcane hobby or has been where you have been. These things often exclude diversity and don’t matter for research.

Here are the questions I will use this year:

[Photo: Jennie Kudzdal-Fick, Ph.D. and glory!]

1. Can you tell me about any research or independent study that you did as an undergraduate?

2. Tell me about your favorite undergraduate classes and what made them so great?

3. What are you interested in exploring in graduate school?

4. What kinds of things excite you the most about research?

5. What kinds of techniques have you learned; which ones do you like; which ones do you find challenging?

6. Can you tell me about a time when you were a teacher or a mentor?

7. Tell me about an article you read recently that seemed really interesting?

8. How might you thrive in this department?

In other years I have used other questions. I have actually written a lot on choosing a graduate school and on how to impress us, here, and on how to figure out self-starters, here. But it is probably best to dig through my posts searching for “grad student” and to learn how to avoid both zombie and vampire professors.

Good luck! It will be a fabulous few years!


The ethical treatment of data: six essentials

[Photo: How many willets?]

Science is founded on a lovely relationship between theory and data. Theory predicts patterns. Data tells us which theories work. Together they make sense of our world. How do we teach students to handle data properly? Six general areas come to mind.

Easiest and most obvious is not to cheat in any way. If the pH meter reads 7.10, that is what you write down. It would be wrong to write down even 7.14. If the mouse moves right, you cannot say it moved left. If there is a band on the gel at a certain position, record it. And on and on. Report what you measure. This is actually the easiest essential, and the worst to break. Do not agonize too much about teaching this one, even considering recent events, because it is the easiest. Would it be too much to say that those who break this one are different from you and me?

The second essential is similar. Do not let others convince you to change your data. If someone else, even in your own lab, suggests that you did not see what you saw, or measure what you measured, do not change it. The ethical treatment of collaborators is a huge topic of its own, to be treated in a separate entry, but no one should ever try to convince you to change your data. This does not mean you never redo measures if, for example, the pH meter was off last week, but don’t redo them because the result didn’t support your pet theory.

The third essential is to understand your own biases and how they impact data collection. Even if you are trying your best, you might inadvertently bias your data to the direction you think it should go. We are all vulnerable to this. This is not fraud like point one. So, whenever possible conduct your study blind. This does not mean you shut your eyes. It means the person scoring the data is ignorant of the impact of a given outcome. If some males were injected with extra testosterone and then observed to see if it made them more aggressive, the person observing the behavior should not know which birds were which, for example.
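For anyone setting up a scoring sheet, here is a minimal sketch of one way to do the blinding, assuming a hypothetical CSV of bird IDs and treatments (the file and column names are my own, not from the post): the observer only ever sees random codes, and the key linking codes to treatments stays in a separate file until scoring is done.

```python
# blind_scores.py -- a sketch of blinding a scoring sheet (hypothetical file names)
import csv
import random

# Read the real assignments: each row is assumed to have bird_id and treatment columns.
with open("treatments.csv") as f:
    birds = list(csv.DictReader(f))

# Give every bird a random code the observer will score under.
codes = list(range(1, len(birds) + 1))
random.shuffle(codes)
for bird, code in zip(birds, codes):
    bird["code"] = f"B{code:03d}"

# The observer gets only the codes...
with open("scoring_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code"])
    writer.writeheader()
    writer.writerows({"code": b["code"]} for b in birds)

# ...and the key stays locked away until all the behavior has been scored.
with open("blinding_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "bird_id", "treatment"])
    writer.writeheader()
    writer.writerows(
        {"code": b["code"], "bird_id": b["bird_id"], "treatment": b["treatment"]}
        for b in birds
    )
```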

The fourth essential is to analyze your data properly. Use appropriate statistics. Understand random and fixed variables. Use parametric statistics only when the assumptions are met. But this is only the beginning. In our genomic analyses there are all kinds of complexities to worry about. Your data form patterns only with correct statistics.
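As one small illustration of checking assumptions before reaching for a parametric test, here is a sketch in Python; the two-group design and the made-up numbers are my own assumptions, not anything from a real study.

```python
# Sketch: pick a two-group test only after checking its assumptions (data are invented).
from scipy import stats

control = [7.10, 7.14, 7.09, 7.21, 7.18, 7.12, 7.15]
treated = [7.30, 7.28, 7.41, 7.33, 7.25, 7.39, 7.36]

# Index 1 of each result is the p-value.
normal = all(stats.shapiro(g)[1] > 0.05 for g in (control, treated))
equal_var = stats.levene(control, treated)[1] > 0.05

if normal and equal_var:
    result = stats.ttest_ind(control, treated)  # parametric: assumptions look reasonable
else:
    result = stats.mannwhitneyu(control, treated, alternative="two-sided")  # fallback
print(result)
```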

The fifth essential is that your data and their analysis should actually show what you say they show. It is surprising how often people get this one wrong. Consider what your data show and do not discuss things they did not show. You might have wanted to study that other topic, but you did not.

The last point is to make your data and your analyses public. This should be entirely possible for data and is increasingly possible for analyses. Someone else should truly be able to replicate what you did with your data and come to the same result.

I’m sure there are lots of other important cautions on data. But these six categories seem to me to cover the most crucial areas. Don’t cheat. Don’t listen to others who want you to cheat. Be aware of inadvertent bias. Use the right statistics. Don’t overextend your results. Make your data and your analyses public. And of course, have fun, for you will be on the path of discovering new truths!



Trust your collaborators?

How many wasps are on this nest? What are their unique identifying marks? How many eggs, larvae, and pupae are in the nest? How many times does a given wasp dominate another? These are the questions that gave the numbers for my earliest graduate school research projects. I worried about every single one of these numbers. Is there a wasp hiding on the back of the nest? Have I recognized the painted marks correctly? Did I miss a young larva, calling it an egg (actually unlikely because eggs are pearly white and young larvae, though tiny, turn pinkish)? Was that really a domination, or just a wasp climbing on another as she flew off the nest? I worried about my data and did my best.

We learn what blind means in science. If there is an experiment, we make sure that when we score the resulting actions we do not know which treatment a case received. There are lots of ways to do this. But the point is we don’t really even trust ourselves to be unbiased, because we could inadvertently favor a hypothesis. How to avoid bias is something worth spending time on.

We keep careful data notebooks. In my lab these are still mostly on paper. The pages are numbered and dated. But increasingly data are collected directly onto loggers of various sorts. There we also preserve the details of data collection.

Once we have collected our data, we examine it for obvious errors. We graph it and look hard at the outliers. Are they real, or was there a data entry or other kind of mistake? Often we enter data twice and then compare as a way of checking that step. Of course, if the outliers are real, we keep them.
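For what it is worth, here is a small pandas sketch of that double-entry check and a crude outlier flag; the file names and the column are hypothetical, and this only marks values for a second look rather than deciding anything for you.

```python
# Sketch: compare two independent entries of the same data sheet and flag outliers.
# File and column names are hypothetical; entries must share the same rows and columns.
import pandas as pd

entry1 = pd.read_csv("wasp_data_entry1.csv")
entry2 = pd.read_csv("wasp_data_entry2.csv")

# Any cell that differs between the two entries is a transcription error to chase down.
mismatches = entry1.compare(entry2)
print("Cells that disagree between the two entries:")
print(mismatches)

# Flag values more than 3 standard deviations from the mean for a second look
# (and then keep them if they turn out to be real).
col = entry1["dominance_acts"]
outliers = entry1[(col - col.mean()).abs() > 3 * col.std()]
print("Rows worth a second look:")
print(outliers)
```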

But what about our collaborators? What if they have not been as careful as we have? How can we tell that we have a sloppy or fraudulent collaborator? What checks should we do? These questions are timely because of the Jonathan Pruitt case, where collaborators who trusted his data and trusted him are now retracting papers. I am not going to summarize that case here, but here are links to what Kate Laskowski, Dan Bolnick, and Science have said. Perhaps a link to the Dynamic Ecology blog and my own previous post are also warranted.

I know you want an answer. Perhaps a great R package to run your collaborators through. Or a tutorial from Elizabeth Bik on how to recognize fraud in images of biological samples or gels. Maybe you want a personality test, or to learn of the traits common to those who cheat.

I have to disappoint you. I have racked my brain for what we might do, but everything I thought of ran into two stumbling blocks. One was wondering what else the collaborators in the Pruitt case might have done. The other was thinking of my own collaborators and how I might behave differently in the future.

My conclusion was that, no, we cannot be constantly checking our collaborators’ data. No, we cannot ask that they show us their raw data. No, we cannot identify a flawed personality type that cheats. We are stuck. These techniques might work occasionally, but basically they will not work. They did not break open the Pruitt case. That, apparently, was thanks to an internal whistleblower with inside knowledge of the problem (see previous links).

Or are we stuck when it comes to our collaborators? I think there are only two possible solutions. The first is to stop collaborating. Collect all your data yourself. Then you will be sure of its accuracy. But what would that do to science? How greatly would that slow down progress?

What is the other solution? It is easy but flawed, and it is by far the best option. It will not avoid any of the pain the Pruitt collaborators are currently suffering, but it is best for science. It is simply to trust your collaborators.

Odds are they are trustworthy. In most cases they are either replicating something you are doing, but somewhere else, as with the big ecology experiments where plant communities are studied in similar ways all over the globe, or they are providing expertise in something you are not skilled in and have no hope of ever learning on top of everything else you do. In neither of these cases can you check their data in any meaningful way.

And no, trustworthiness does not increase with the number of beers you have shared with a collaborator. Cooperation, new ideas, and scientific fun may increase, but not necessarily good data.

For some people, some of their collaborators will provide flawed data. Should we limit the potential impact of this possibility by not collaborating too much with any one person? Here again, I would say no. I have spent much of my career collaborating mostly with one person, and with a lot of others in addition. I know of other very productive, long-term, completely trustworthy collaborations.

So I suggest a simple personal solution. Trust your collaborators. But what does that mean for science when someone turns out to be fraudulent?  Where is the protection there? Here again I have an answer. It is that no important idea should be validated by work from just one lab or one set of collaborators.

We should think hard about this and keep track of what truths have come from just one group. If they are close to your field, spend some time redoing, or doing similar experiments so that science progresses on a firm foundation. And of course, remember to be absolutely trustworthy yourself. I hope someone will let us know what to now believe about spider social personalities.


A tragedy in animal behavior and heroic responses

As I write this, 8 papers for which data were collected by Jonathan Pruitt are somewhere in the pipeline for retraction, and another 5 have been identified with data problems. Many others are being checked. Yet other papers, mostly with data collected by others and Pruitt as an author, have been cleared. This is a tragedy of many dimensions. If you have not heard about it, look at what Kate Laskowski and Dan Bolnick have written. Dan’s piece provides a link to a Google Sheet where you can follow the story article by article.

I say this is a tragedy, and now I will talk about the victims. The first victim is science. How can we go forward with trying to understand our world and its players if we cannot trust the evidence behind what we read? What if people who disagree with us simply say our data cannot be trusted? What if no data can be trusted? Then what we do is no longer science. Science seems to be taking a bigger and bigger hit these days, as the wonderful detective work of Elizabeth Bik also shows. Check out the images she posts on Twitter showing fraudulent duplication.

If science is the first victim, scientists are the second victim. The scientists who are hurt the most are those who were collaborators with Jonathan Pruitt. They have seen years of work go down the drain. They fear for their own reputation. They mourn for the really cool ideas they thought were true. And they still might be, but evidence for that needs to be collected anew.

We need to support these collaborators in whatever way we can. They chose a path through this morass that will make science better ultimately, at some personal risk. Kate Laskowski’s brilliant piece is a model of great writing and ethical science. This is not the kind of hero she ever wanted to be, but heroics come from what we do with situations that are thrust upon us. We will all be grateful to Kate as we examine our own procedures for data purity.

I will not say Jonathan Pruitt is a victim, but he is part of the tragedy. Will we ever really know what motivated him? I decline to guess. He burst onto the animal behavior scene with his first paper in 2008 and immediately began publishing at such a prolific rate that in another year or two he would have overtaken my own 41-year career in number of publications. This output got him a lot of academic success, leading to his current position (current as I write, anyway) of Canada 150 chair at McMaster University.

What Jonathan Pruitt produced was so far beyond average that it is hard to believe anyone would feel pushed to that level. But others do feel pressure to produce in academia. Perhaps there are ways we can diminish that. For example, I really loved it when the US National Science Foundation (NSF) started allowing only 10 papers on a Biographical Sketch. Reward great work, not much work.

In following posts I will talk about what I originally meant to write about: how to trust data from a collaborator. I will also write about best practices and how to teach them. I know we are all taking a moment for gratitude toward our honest collaborators, for reflection on our own data collection, and for sadness over this tragedy.


Do this before you say yes to anything

You just got asked to do something new. It might be to join an on-campus committee, or to talk at a fancy university. It might be a one-off request to review a paper, write a letter of recommendation, or serve on a National Science Foundation panel. How do you decide whether to say yes?

Your first response might be to say why yes, I would love to do that. Thank you so much for thinking of me and making me feel that I’m part of this great process of making science and scientists better in the nation and the world. Surely I can fit your very valid request into my day.  And you say yes, and yes, and yes again.

And then you wonder where your time went. You try to count the hours you spend in a day thinking about ideas and find that it is not hours but minutes. Your exercise schedule diminishes. You no longer enjoy chopping the first onion of the evening for dinner. You turn back to the computer after the kids are in bed, or even before.

Do not let this happen to you! Reserve time for research and your family above all else. And of course there will be the commitments of teaching. So how do you find a balance? I have two recommendations.

The first is to stop seeing your life as an empty field onto which you can paint new blooms. See your life instead as a packed prairie where every plant has fought to get there. If you add something, you must take something away. Make yourself name it. Have a chart if that helps. And no, deciding you will magically work harder or more efficiently is not an answer. Are you going to take away sleep? Not a good idea.

It is true you will gradually become more efficient and be able to do more than before. I spent a week writing the first tenure recommendation for someone decades ago. Now I can do it in 3 hours, sometimes less. But this doesn’t happen just because you suddenly got asked to do something new.

Many new things are well worth doing. Serving on an NSF panel early in your career can be insightful. I remember a time when I wondered why I was not on the editorial board of any journal and so was very glad when I was finally asked. I also remember the first time I was asked to give a talk at another university. But as your career advances these and all the rest start piling in. It all glitters, but don’t pick it all up. So how do you decide?

[Photo: A mentor can be a friend and peer. Me and my college roommate, Nancy Scheer]

My second main piece of advice is that you cannot decide alone. You should have a mentor, or even a small committee of mentors who can help. They will know all your other obligations. You can talk it through with them. They can help you see what is in your best interest, and help you see how to balance giving back to the community, local and international, with continuing your own career. After all, whoever asked you to do something has no idea what else you have to do, or what your life looks like right now. I once asked the late Ilkka Hanski to handle a manuscript for PNAS, and he wrote me back that he was busy helping his lab group before dying and could not do it. I still feel bad that I even took enough of his time to get that answer. But your mentor will know. Listen to her, because she will listen to you and help you decide.

A couple of mentors might be better because they will have different perspectives. So, as your career grows remember that your day is already full. Some things are still worth adding as others are subtracted. The trick is to figure out what. I am still grateful to my mentors for helping me find balance and forgiving me for saying no, and also for saying yes.

