An easy productivity tip: don’t stop at a stopping place

You know the feeling. Four ideas are juggling in your brain and you need to get them down on paper. They shift around as you struggle for the best order, put in transitions, and write your paragraphs. After all, you want your reader to experience the material just as you did and this is hard. But at last you are done. Your ideas are pinned down and your brain can relax. You stretch, look at your empty coffee cup and prepare to get up.

Don’t! This stopping place is exactly the worst time to take a break. Instead, keep going. See what comes next and write down a couple of sentences, some thoughts, or even just a few notes. After all, you know what will come next since you were just now so enmeshed in what came before. It can be the work of a few moments to get these thoughts down.

Then, when you return to work after your break, you will be able to dig right in without recalibrating. You will see where you are, in the middle of something. The activation energy is not nearly so great as it would be if you had actually stopped at a break in the material.

Another danger of stopping at a stopping place is that you might pick up something else instead of continuing work on this project. You might have teaching to prepare, or a review or a letter of recommendation to write. These are worthy tasks, but they should be left for their own time, perhaps late in the day when your own writing on your project is done.

The wisdom of never stopping at a stopping place applies to all projects, big and small. As long as the project is not complete, work a bit into the next section before you stop. Writing a book? Towards the end of it, when you might be more involved with editing and publicizing, write a few pages on your next book. Writing the discussion of a paper? Begin the next section before a break.

Whatever you do, make sure that beginning to work again is as easy as possible; that happens when you are still in the middle of an idea, not at its end.

Posted in Writing

Please join the American Association for the Advancement of Science for a secret reason in addition to the main ones

Congratulations to all the new fellows of AAAS, the American Association for the Advancement of Science! Those from Wash U were honored with nice write-ups in the Wash U Record. Here is a list of all of them. It is a wonderful thing to receive this honor, this recognition of decades, or at least a decade, of hard work in your scientific discipline!

In these troubled times, there are many of us that love science, that love what it brings us, perhaps right now most of all a vaccine developed in an entirely new way. So let’s celebrate these new fellows.

It is also important to remember that knowledge does not exist on its own. It must be disseminated. We scientists are best at telling our stories along with their evidence to other scientists. But just as important is telling the stories to the general public. Just as important is telling how we know what is true and what is not.

Besides telling our discoveries to other scientists and to the public, there is also an important role for interpreting those discoveries into information on policy. If we have a goal of saving a species, of feeding more people, of achieving happiness for the most people, if we have any goal at all, science is likely to inform how we get there. Because of this, opinions based on science need to reach policy makers.

The American Association for the Advancement of Science, and its flagship journal, Science, do these three things: scientific communication, public communication, and policy communication. This is our society and our country. We need to support it, now perhaps more than ever.

But there is another, perhaps more self-centered reason to belong to AAAS and support it. It is that you cannot ever be elected as a fellow unless you are a member. And not just a member for a year or two, but for at least four years! So if you are feeling a little like your name should have been included among those that just got elected as fellows, consider whether or not you are a member. It is not the only reason by any means, but I bet you didn’t know this.

So join the AAAS!

Posted in Uncategorized

In on-line teaching one thing is essential

In all the preparations with Canvas and Zoom, in all the discussions with my Teaching Assistants, and in reading the blizzard of emails my anxious university sends out, there is one thing I keep remembering from my son’s long experience with on-line learning. After all, remote teaching is new to me and new to nearly all the many teachers across the world in this pandemic, but it is not new to experienced on-line teachers who have been offering remote classes with excellence from top universities like Oregon State.

It is something we can do that is quite easy and yet is transformative for the student. It can be applied to Zoom meetings, to Discussion questions in Canvas, and to all the very necessary interactive aspects of a course. It is based on fundamental principles of human interaction.

The power of learning in small groups

Doing this will enrich a student’s experience. It will make her feel that she is in an intimate classroom with colleagues in situations where she can build enduring relationships. Actually, it is something that can and should be done for in-person classes, though the traditional fixed seats, all pointed at the fancy man lecturing up front, made it hard.

What is this trick, this teaching magic I have been building up to in a slightly annoying way? It is simply to engineer the student’s experience into smaller, more personal groups that have repeated interactions. When I taught in person, I had students at tables of six, with activities either among the six, or among a team of three. The three worked intensely together polishing their Wikipedia writing. At most I combined two tables of six into a group of twelve that prepared to teach high school students in a workshop.

How will this work for remote teaching? Put the students into small groups whenever possible. After students grapple with the material by reading or listening to recorded lectures (I’m doing mine as podcasts), bring them into smaller groups to discuss. In Canvas you can set up groups so that for Discussion questions students see only a subset of the class to comment on. I divided my 46-person class into quarters, called Orbs, Wolves, Widows, and Jumpers because we are writing about spiders this year. When they write on the Discussion questions I’m using instead of tests, they will comment on a few others, seeing only others in their group of 12 or so. Since these assignments are weekly, they should get to know the thinking of their groupmates quite well by the end of the semester.
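As a sketch of how such a division might be scripted (the group names come from this post; the student names are hypothetical placeholders, since of course a real roster would come from Canvas):

```python
import random

def assign_groups(students, group_names, seed=None):
    """Shuffle the roster, then deal students into the named groups
    round-robin, so group sizes differ by at most one."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    groups = {name: [] for name in group_names}
    for i, student in enumerate(shuffled):
        groups[group_names[i % len(group_names)]].append(student)
    return groups

# Hypothetical roster of 46 students split into the four spider groups.
roster = [f"student_{i:02d}" for i in range(1, 47)]
groups = assign_groups(roster, ["Orbs", "Wolves", "Widows", "Jumpers"], seed=1)
for name, members in groups.items():
    print(name, len(members))  # two groups of 12 and two of 11
```

Fixing the seed keeps the assignment reproducible, so the same roster always yields the same groups.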

I have put the students into groups of just three for their Wikipedia writing, on spiders, of course. One will be a fact checker, one a writing expert, and one a Wikipedia guru and they will read each other’s work often. They will also exchange for comments with another group of three, which would have been seated at their same table had we been in person.

Nearly all the work will be asynchronous, but we will hold a Zoom meeting at the scheduled class time. It will be a chance to come together as a group and ask questions, but most of the time will be in the smaller groups chosen from within Orbs, Wolves, Widows, or Jumpers. These smaller groups will work through study questions, prepare for writing their Discussion questions for Canvas, and work on their Wikipedia entries.

By the end of the semester I hope the students will have found life-long friends within their groups of six. Back when one could, the groups of six often went out together and indeed became friends.

The ties that form in smaller groups make learning fun. They make it easier for students to reveal when they do not understand difficult material and work through it with their friends. They make a strange learning experience familiar through human bonds. Give it a try!


Posted in Teaching

Why we fail at hiring Black faculty in biology departments

Today was a day of reflection. How have we gotten to this sorry point in the history of the US? What have I personally contributed to the problem? What have I personally contributed to the solution? How can we do better?

Know that we all breathe the same air, that problems that hit the Black community first come for the rest of us too. Know that many of our efforts at social justice, at fairness, at understanding what our rights are, fail because of the legacy of slavery and the despicable prejudice against Black people that still permeates our society.

What can I do about it? First, understand my own power. Second, understand what has happened. There is no answer short of radically changing how we do things.

I learn from reading. I may never understand what it is to be Black in America, but I can read and empathize. I put African-American interests on my Book Bub readings, so I can get books suggested daily. Three books I have read recently that moved me are Jesmyn Ward’s Men We Reaped, my friend Rafia Zafar’s Recipes for Respect, and Isabel Wilkerson’s The Warmth of Other Suns. Each powerful in its own way, they help me understand the experiences of others. Read these books and keep reading as I will. Understanding comes slowly. There are hundreds of books worth reading.

I fear we will read these and other books. We will march, as I did last week, mask cinched tight. We will put Black Lives Matter signs in our yards. We will be among the righteous.

And then we will go to our faculty meetings. We will evaluate job candidates and imagine we are being fair. After all, don’t the same standards apply to every one? Can we not count publications and grants and evenly choose the person with the most shiny beads in the currency we have chosen to count? Does not everyone have an equal chance at assembling their little pile of shiny beads? Would it be fair to look at anything except those beads?

I have now been a tenured or tenure track professor for 40 years at two institutions. I have sat through many hiring decisions. I know all about those shiny beads. I myself have lots of them, and the accolades that come with them. So I think I can speak with some authority on baubles like these beads. The first thing to know is that they are easy to count. The second thing to know is that they mean little.

We are not very good at predicting who will be an excellent professor (whatever that means) ten years from now. Actually, we are terrible at it. The remarkable Richard Tapia once ran an all-day workshop at which he discussed this. He mentioned that he was not the first choice of the search that hired him. He further mentioned, if I recall correctly, that one of Rice University’s few Nobel Laureates, Richard Smalley, was not in the top two or three choices of the search that eventually hired him. When I think back over the years of the people we interviewed and the people we hired, I can say that we made some great choices, but we also let some great people go.

But one thing is sure. Counting those beads has not led us to a diverse professoriate. What if we quit doing it entirely and look instead at excellence, look at it directly in the eye? What if instead of beads we consider ideas? What if instead of beads we consider what a person will do for our program and our institution? What if instead of beads we look for promise? To do this we might actually have to read papers. We might actually have to look at the entire person. We might have to accept our inability to measure excellence and consider the importance of diversity. We might have to understand that a White man who has had the benefit of great mentoring and achieved many beads might not actually be better than someone else with fewer beads, fought for harder and with less mentoring. Might the latter person, properly mentored, actually be the better hire, not only for reasons of leadership, but also for innovation, creativity, and new perspectives?

As a White woman I cannot contribute to the running Twitter feeds on inappropriate things said to Black professors. But I could give quite a list of inappropriate things said about Black candidates for jobs, or graduate school. Listen to yourselves.

We need to rebuild this thing from the bottom up. We have failed as departments in hiring the best people. We will keep failing as long as we hold onto our belief that we can pick the best candidates. We will keep failing as long as we love our shiny beads. We can do better. When?

Posted in diversity, Jobs, White male bias

Fishy pasta sauce

We are fortunate to be sheltering in place with an old friend and Clark Way Harrison visiting professor from Italy, so instead of adding to the excellent advice about Zoom and other tricks of distance teaching, I will be sharing recipes and philosophies of cooking. Patrizia d’Ettorre is from Abruzzo, from a small town right on the sea. She went to university in Parma, has lived in Denmark, and now is a professor in Paris. Her cooking is expert and simple. Her recipes bring out the best from excellent ingredients, the best we have here.

Last night we had a fabulous fresh trout, obtained in a pre-ordered bundle from Daniel Roth at the University City farmer’s market. With the cup or so of remaining cooked trout she made this sauce. It could be made with any fish. It could have tomato sauce added, also. But I like its simplicity. Italian food is as much about aroma as about taste.

I prefer to cook in weights. Patrizia mostly does it by feel. I will try to put volume equivalents, but with this dish, it doesn’t matter. You could double any ingredient.

Trout pasta sauce with black olives

180 g (about a cup) leftover fish, or an 8 oz can of tuna, mackerel, or salmon

30 g (quarter cup) chopped black olives. Or capers. I thought this would overwhelm the fish, but it didn’t.

1 shallot or a quarter of a small onion, maybe 2 tablespoons chopped

juice of one lemon

1 tablespoon chopped parsley

454 g (one pound or one usual box) pasta. We used spaghetti.

olive oil

To cook:

Bring a pot of water to boil. Salt the water with about a tablespoon of Kosher salt.

Sauté the shallot lightly in 2 tablespoons of olive oil (we use the current harvest of Tuscan oil we get at Costco).

Add the fish and the olives and sauté for a couple more minutes on low.

Cook the pasta to just before al dente. I usually do 2 minutes less than the box says.

Drain the pasta, saving a cup of the cooking water in case you need it to moisten the prepared pasta.

Add the lemon juice to the fish mixture and mix into the pasta. Sprinkle parsley on top. Add a little olive oil and some of the cooking water if necessary to moisten it to your liking.

Enjoy, and remember Italians never put cheese of any kind on pasta with fish.

Posted in recipes

Best updates on the new coronavirus (#coronavirus, #covid19, #SARSCoV2) are on Twitter: a brief tutorial

Want to see a phylogeny of #covid19? Follow @nextstrain. Want to hear what a Harvard epidemiologist thinks? Follow @mlipsitch. Want a thoughtful microbiologist’s perspective? Follow @RELenski. Want to know the latest advice from the World Health Organization? Follow @WHO. Want more general advice? Search various hashtags in Twitter, like #coronaoutbreak, #coronavirus, #SARSCoV2, or #covid19.

There are lots of other scientists, news reporters, and health experts posting information that goes far past 2 weeks of food and hand washing. It shows up first on Twitter. I’ll leave it to you to choose whom you would like to follow.

The truth is that for some time Twitter has been where I hear scientific news first. It is quick, interesting, and timely. How can that be when so many disparage Twitter? It is all about whom you follow. I keep Twitter for science, the environment, education, and the like. Facebook is for personal stuff, more banal.

So load the Twitter app onto your phone and start following people that post cool science. Start with the four I mentioned and then it is easy to follow others from their lists. You don’t have to post anything yourself if you don’t want to.

Posted in Public Communication, Scientific news

One important way to be fair in grad school interviews

Today is the day! We are interviewing possible graduate students! They have made it past a huge cut and have been invited to campus. Which ones will we commit to for 5 or more years? Which ones will join our labs, have their lives changed by our research and our culture? How can we choose the best students? One thing that is crucial is to be fair. Do not look for your younger twin. Treat everyone equally. One way to do this is to ask the same questions of everyone. Don’t find someone who shares your arcane hobby, or has been where you have been. These things often exclude diversity and don’t matter for research.

Here are the questions I will use this year:

1. Can you tell me about any research or independent study that you did as an undergraduate?

2. Tell me about your favorite undergraduate classes and what made them so great?

3. What are you interested in exploring in graduate school?

4. What kinds of things excite you the most about research?

5. What kinds of techniques have you learned; which ones do you like; which ones do you find challenging?

6. Can you tell me about a time when you were a teacher or a mentor?

7. Tell me about an article you read recently that seemed really interesting?

8. How might you thrive in this department?

In other years I have used others. Actually, I have written a lot on choosing a graduate school, on how to impress us, here. On how to figure out self starters, here. But it is probably best to dig through my posts searching for “grad student” to learn how to avoid both zombie and vampire professors.

Good luck! It will be a fabulous few years!

Posted in Uncategorized

The ethical treatment of data: six essentials

How many willets?

Science is founded on a lovely relationship between theory and data. Theory predicts patterns. Data tells us which theories work. Together they make sense of our world. How do we teach students to handle data properly? Six general areas come to mind.

Easiest and most obvious is not to cheat in any way. If the pH meter reads 7.10, that is what you write down. It would be wrong to write down even 7.14. If the mouse moves right, you cannot say it moved left. If there is a band on the gel at a certain position, record it. And on and on. Report what you measure. This is actually the easiest essential, and the worst to break. Do not agonize too much about teaching this one, even considering recent events, because it is the easiest. Would it be too much to say that those that break this one are different from you and me?

The second essential is similar. Do not let others convince you to change your data. If someone else, even in your own lab, suggests that you did not see what you saw, or measure what you measured, do not change it. The ethical treatment of collaborators is a huge topic of its own, to be treated in a separate entry, but no one should ever try to convince you to change your data. This does not mean you never redo measures if for example the pH meter was off last week, but don’t redo it because the result didn’t support your pet theory.

The third essential is to understand your own biases and how they impact data collection. Even if you are trying your best, you might inadvertently bias your data to the direction you think it should go. We are all vulnerable to this. This is not fraud like point one. So, whenever possible conduct your study blind. This does not mean you shut your eyes. It means the person scoring the data is ignorant of the impact of a given outcome. If some males were injected with extra testosterone and then observed to see if it made them more aggressive, the person observing the behavior should not know which birds were which, for example.
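One mechanical way to set up such a blind is to hand the scorer anonymous codes and keep the code key sealed until scoring is finished. A minimal Python sketch, under the assumption of a simple label-swapping workflow (the bird names are hypothetical):

```python
import random

def blind_labels(samples, seed=None):
    """Assign each sample an anonymous code like 'B07' in random order.
    Returns (codes, key): the scorer works only from the codes; the key,
    mapping code -> original label, stays hidden until scoring is done."""
    rng = random.Random(seed)
    order = samples[:]
    rng.shuffle(order)
    key = {f"B{i:02d}": sample for i, sample in enumerate(order)}
    return sorted(key), key

# Hypothetical example: testosterone-injected vs. control birds.
samples = [f"bird_{i}_T" for i in range(5)] + [f"bird_{i}_C" for i in range(5)]
codes, key = blind_labels(samples, seed=42)
print(codes)  # the scorer sees only these codes, never the treatments
```

After all the behavior is scored against the codes, the key is opened and the treatments are merged back in for analysis.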

The fourth essential is to analyze your data properly. Use appropriate statistics. Understand random and fixed variables. Use parametric statistics only when the assumptions are met. But this is only the beginning. In our genomic analyses there are all kinds of complexities to worry about. Your data form patterns only with correct statistics.

The fifth essential is that your data and their analysis should actually show what you say they show. It is surprising how often people get this one wrong. Consider what your data show and do not discuss things they did not show. You might have wanted to study the other topic, but you did not.

The last point is to make your data and your analyses public. This should be entirely possible for data and is increasingly possible for analyses. Someone else should truly be able to replicate what you did with your data and come to the same result.

I’m sure there are lots of other important cautions on data. But these six categories seem to me to cover the most crucial areas. Don’t cheat. Don’t listen to others who want you to cheat. Be aware of inadvertent bias. Use the right statistics. Don’t overextend your results. Make your data and your analyses public. And of course, have fun, for you will be on the path of discovering new truths!


Posted in Uncategorized

Trust your collaborators?

How many wasps are on this nest? What are their unique identifying marks? How many eggs, larvae, and pupae are in the nest? How many times does a given wasp dominate another? These are the questions that gave the numbers for my earliest graduate school research projects. I worried about every single one of these numbers. Is there a wasp hiding on the back of the nest? Have I recognized the painted marks correctly? Did I miss a young larva, calling it an egg (actually unlikely because eggs are pearly white and young larvae, though tiny, turn pinkish)? Was that really a domination, or just a wasp climbing on another as she flew off the nest? I worried about my data and did my best.

We learn what blind means in science. If there is an experiment, we make sure when we are scoring resulting actions we do not know which treatment a case received. There are lots of ways to do this. But the point is we don’t really even trust ourselves to be unbiased because we could inadvertently favor a hypothesis. How to avoid bias is something worth spending time on.

We keep careful data notebooks. In my lab these are still mostly on paper. The pages are numbered and dated. But increasingly data are collected directly onto loggers of various sorts. There we also preserve the details of data collection.

Once we have collected our data, we examine it for obvious errors. We graph it and look hard at the outliers. Are they real, or was there a data entry or other kind of mistake? Often we enter data twice and then compare as a way of checking that step. Of course, if the outliers are real, we keep them.
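The double-entry comparison and outlier screen described above can be sketched in a few lines of Python (the wing-length numbers and the cutoff are made up for illustration):

```python
def compare_entries(entry1, entry2):
    """Return row indices where two independent data entries disagree;
    each disagreement is a candidate typing error to resolve against
    the original notebook."""
    assert len(entry1) == len(entry2), "entries must cover the same rows"
    return [i for i, (a, b) in enumerate(zip(entry1, entry2)) if a != b]

def flag_outliers(values, k=1.5):
    """Flag values more than k standard deviations from the mean for a
    hard look: real extreme values are kept, entry mistakes corrected."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) > k * sd]

# Hypothetical measurements entered twice; row 3 was mistyped once.
first  = [10.2, 11.1, 10.8, 110.8, 10.5]
second = [10.2, 11.1, 10.8, 10.8, 10.5]
print(compare_entries(first, second))  # the row where the entries differ
print(flag_outliers(first))            # the 110.8 stands out
```

The flagged rows are only candidates: a value that survives the check against the notebook is real and stays in.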

But what about our collaborators? What if they have not been as careful as we have? How can we tell that we have a sloppy or fraudulent collaborator? What checks should we do? These questions are timely because of the Jonathan Pruitt case, where collaborators who trusted his data and trusted him are now retracting papers. I am not going to summarize that case here, but here are links to what Kate Laskowski, Dan Bolnick, and Science have said. Perhaps a link to the Dynamic Ecology blog and my own previous post are also warranted.

I know you want an answer. Perhaps a great R package to run your collaborators through. Or a tutorial from Elizabeth Bik on how to recognize fraud in images of biological samples or gels. Maybe you want a personality test, or to learn of the traits common to those who cheat.

I have to disappoint you. I have racked my brain for what we might do, but everything I thought of ran up against two stumbling blocks. One was wondering what else the collaborators in the Pruitt case might have done. The other was thinking of my own collaborators and how I might behave differently in the future.

My conclusion was that, no, we cannot be constantly checking our collaborators’ data. No, we cannot ask that they show us their raw data. No, we cannot identify a flawed personality type that cheats. We are stuck. These techniques might work occasionally, but basically they will not work. They did not break open the Pruitt case. That, apparently, was thanks to an internal whistle blower with inside knowledge of the problem (see previous links).

Or are we stuck when it comes to our collaborators? I think there are only two possible solutions. The first is to stop collaborating. Collect all your data yourself. Then you will be sure of its accuracy. But what would that do to science? How greatly would that slow down progress?

What is the other solution? It is easy but flawed, yet it is the best option by far. It will not avoid any of the pain the Pruitt collaborators are currently suffering. But it is best for science. It is simply to trust your collaborators.

Odds are they are trustworthy. In most cases they are either replicating something you are doing but somewhere else, as with the big ecology experiments where plant communities are studied in similar ways all over the globe. Or they are providing an expertise in something you are not skilled in and have no hope of ever learning on top of everything else you do. In neither of these cases can you check their data in any meaningful way.

And no, trustworthiness does not increase with the number of beers you have shared with a collaborator. Cooperation, new ideas, and scientific fun may increase, but not necessarily good data.

For some people, some of those collaborators will provide flawed data. Should we limit the potential impact of this possibility by not collaborating too much with any one person? Here again, I would say no. I have spent much of my career collaborating mostly with one person and with a lot of others in addition. I know of other very productive long-term completely trustworthy collaborations.

So I suggest a simple personal solution. Trust your collaborators. But what does that mean for science when someone turns out to be fraudulent?  Where is the protection there? Here again I have an answer. It is that no important idea should be validated by work from just one lab or one set of collaborators.

We should think hard about this and keep track of what truths have come from just one group. If they are close to your field, spend some time redoing, or doing similar experiments so that science progresses on a firm foundation. And of course, remember to be absolutely trustworthy yourself. I hope someone will let us know what to now believe about spider social personalities.

Posted in Collaboration, Data and analysis, Ethics

A tragedy in animal behavior and heroic responses

As I write this, 8 papers where data were collected by Jonathan Pruitt are somewhere in the pipeline for retraction and another 5 have been identified with data problems. Many others are being checked. Yet other papers, mostly with data collected by others and Pruitt as an author, have been cleared. This is a tragedy of many dimensions. If you have not heard about it, look at what Kate Laskowski and Dan Bolnick have written. Dan’s piece provides a link to a Google Sheet where you can follow the story article by article.

I say this is a tragedy and now I will talk about the victims. The first victim is science. How can we go forward with trying to understand our world and its players if we cannot trust the evidence behind what we read? What if people that disagree with us simply say our data cannot be trusted? What if no data can be trusted? Then what we do is no longer science. Science seems to be taking a bigger and bigger hit these days as the wonderful detective work of Elizabeth Bik also shows. Check out the images she posts on Twitter for fraudulent duplication.

If science is the first victim, scientists are the second victim. The scientists who are hurt the most are those who were collaborators with Jonathan Pruitt. They have seen years of work go down the drain. They fear for their own reputation. They mourn for the really cool ideas they thought were true. And they still might be, but evidence for that needs to be collected anew.

We need to support these collaborators in whatever way we can. They chose a path through this morass that will make science better ultimately, at some personal risk. Kate Laskowski’s brilliant piece is a model of great writing and ethical science. This is not the kind of hero she ever wanted to be, but heroics come from what we do with situations that are thrust upon us. We will all be grateful to Kate as we examine our own procedures for data purity.

I will not say Jonathan Pruitt is a victim, but he is part of the tragedy. Will we ever really know what motivated him? I decline to guess. He burst on the animal behavior scene with his first paper in 2008 and immediately began publishing at such a prolific rate that in another year or two he would have overtaken my own 41 year career in numbers of publications. This output got him a lot of academic success leading to his current position (current as I write anyway) of Canada 150 chair at McMaster University.

What Jonathan Pruitt produced was so far beyond average, it is hard to believe anyone would feel pushed to that level. But others feel pressure to produce in academia. Perhaps there are ways we can diminish that. For example, I really loved it when the US National Science Foundation (NSF) started allowing only 10 papers on a Biographical Sketch. Reward great work, not much work.

In following posts I will talk about what I originally meant to write about: how to trust data from a collaborator. I will also write about best practices and how to teach them. I know we are all taking a moment for gratitude to our honest collaborators, for reflection on our own data collection, and for sadness at this tragedy.

Posted in Collaboration, Ethics