While there is lots of advice about supervisors setting goals with new staff, my take is a little different: one supervisory goal I now have for new staff is to sit down with them in their first 60 days, with their job description and UCSF’s performance evaluation form and ratings rubric in hand, and have what I call a calibrating conversation about what it means to perform well in their role. My goal is to reduce wasted energy and avoid one of the three most common causes of employee stress: unclear job expectations.

I have a number of friends and a few colleagues who have had the wind taken out of their sails after spending a year’s worth of energy working on the ‘wrong thing’ or doing things the ‘wrong way’ according to their boss. They discovered this only when they received a less-than-stellar rating on their performance evaluation. It was hard for them to recover their motivation, confidence, productivity, and trust in their supervisor.

Why have a calibrating conversation? To reveal strong preferences.

Like many managers, I have strong preferences that aren’t universal – about everything from performance goals and behavior, to the criteria for making decisions, to how to respond appropriately to complex situations, to the boundaries and types of red flags I want reported up to me. On the other hand, I am incredibly flexible about how a staff person achieves our goals, so long as their approach doesn’t violate university policy and aligns with OCPD’s stated values. What’s funny is how many managers (myself included) unconsciously believe these performance and conduct preferences are self-evident truths held by every professional – so we let ourselves off the hook of articulating them clearly.

Onboarding a new staff person is essentially orienting someone to a new world; understanding the organization’s expectations and the team’s norms can be like learning an entirely new language and culture. In this context, I think you need both the job description and the performance evaluation form (with its rubric) to really discuss and calibrate performance expectations.

Three steps to a calibrating conversation

  • First: I try to envision what I (perhaps unconsciously) expect their performance and success to look like. To do that, I backward-design from their performance evaluation a year from now – I imagine the kinds of examples I would write into their evaluation as evidence of the highest rating. We then talk about those examples, and I use other staff members’ achievements to paint a picture of what excellence looks like. This gives a staff person ideas for meeting those expectations in their own way.
  • Second: We discuss the text of specific job responsibilities written in their job description – for example, almost all of OCPD’s program directors have language about developing a strategy, engaging stakeholders, and problem-solving in their job descriptions. But what exactly do I mean by “Develop a strategy” or “Engage stakeholders,” and how is that similar to or different from the expectations at their previous organization? What criteria would I like them to consider when I ask them to “Use their judgment to effectively problem solve” (another common phrase in job descriptions these days)? Verbs like “Develop,” “Engage,” and “Use” point to outcomes with no substantive metrics tied to them. In the absence of such metrics, the prudent answer to “What should I do?” is always “More,” and the answer to “Should I do X or Y?” is always “Both.” I’ve seen this lead to burnout as the staff person wastes time trying to do everything to cover all the possibilities of what the supervisor might have meant. Conversations highlighting examples let staff begin to triangulate what success looks like when executing their job responsibilities.
  • Third: We look at my institution’s performance rating criteria: UCSF provides helpful language for articulating both standard job description expectations and the evaluative criteria used to assess and rate employee performance. But the criteria can be a little broad and unexpected. For example, the performance evaluation includes a rating criterion called “Job Knowledge.” A person’s ability in this area is partially evaluated as follows:
    • Meets: Demonstrates a working knowledge of and competency in the skills and duties of the position. (the baseline that staff need to meet)
    • Meets All: Openly shares knowledge with others. (a higher functioning staff person)
    • Consistently Exceeds: Is sought out by clients, peers, and leaders to provide input on issues. (the highest rating)

It helps to align our understanding of what it means to “demonstrate a working knowledge,” so we discuss examples of what that would look like in a typical week. It also helps staff realize that some benchmarks are within their direct control (choosing to “openly share knowledge with others”), while others must be intentionally cultivated (signaling that they are open to being “sought out by clients, peers, and leaders to provide input on issues”).

Consider one of my counselors who has a free hour and is wondering whether to fill it with another counseling appointment or to respond to a faculty member’s question about consulting opportunities for biomedical trainees. We track the number of counseling appointments; I don’t collect stats on the number of faculty who reach out to my staff for a consult. Without specific insight into the evaluation criteria, the staff person might automatically decide that the right answer is to put off the faculty member and book another counseling appointment, or might try, once again, to do both – and start down the road to burnout. Without signaling, they might fear being penalized for dropping a counseling appointment from their schedule. So it’s important for me to signal that I value them speaking with faculty, that they should tell me about those conversations in our 1:1s, and that they will not be penalized for having fewer appointments when they are meeting another stated goal of mine.

Additionally, the frequency benchmark for these definitions is usually left to the supervisor’s interpretation. So in addition to staff not knowing that being sought out by peers is the standard for the highest rating, the rating guide is silent on how often this should happen. That expectation lives solely in the supervisor’s head (if even there). Calibrating conversations are necessary precisely because these criteria are open to the manager’s personal interpretation.

At the end of the day, a lot of the work the OCPD team does happens outside of my presence. Our discussions – particularly reviewing both the job description and the performance evaluation/rating rubric together – align our understanding of which decisions and actions are appropriate and why. This is the basis for raising their productivity, confidence, and autonomy, and for building our mutual trust.