In the first post of this series, I explained how my funding source (and therefore my job description) has changed in the last couple of months, specifically to include more project management. Still undecided about pursuing formal PM training and qualifications, I'm currently trying to adopt the optimal combination of best practices from my prior experiences to apply to each of several projects.
AKA, winging it.
My adaptation of the stand-up meetings we used in my industry position seems to be working pretty well for my biggest, broadest, overarching project, which encompasses several distinct yet interconnected sub-projects. However, I'm going to be trying a different approach to managing some of my other, smaller, stand-alone projects.
As I mentioned last time, I'd love to hear your thoughts and experiences (more best practices to adopt!).
"Wake up, n00b. The metrics have you": the industry experience
My former company was all about the metrics.
The big pushes that required stand-up meetings also required metrics. We all had quantitative targets to hit, and reported back on them at regular intervals. My own metrics included completing a certain number of product inserts, labels, internal documents etc. My progress was tracked and graphed and disseminated and analysed. R&D, QC, QA, Sales, and Tech Support all had their own progress similarly tracked.
I fit into this system rather well. I like numbers. I love graphs. And the system really did make it easier to spot bottlenecks and other problems, and keep people on track; if QC's "signed off on product" metrics are lagging three weeks behind R&D's "released to QC" metrics, there's a problem. If Marketing get so sick of making labels that they decide to shirk their duties and spend three days doodling ideas for the next ad campaign and/or staring out the window instead, their "signed off on label" graph line will plateau well short of its target value and people will comment (this never actually happened, although I was sorely tempted at times).
We used metrics outside of big product launch pushes, too. Everyone was assigned their own metrics for the next year at their annual review: complete five email ad campaigns, publish twelve print ads, launch twenty new products, grow sales of product X by 5%, etc. Our metrics were reviewed quarterly with our immediate supervisor, and were used to assess performance at the next annual review. Meeting or exceeding your assigned metrics was a good way to get the maximum performance-related pay increase (or bonus, some years; I myself managed to work at that company during the only two years in its history in which no bonuses were paid).
I did not like this use of metrics, and will definitely not be proposing to introduce that system in my new job! Job performance is about quality as well as quantity, and in my opinion we relied far too much on measuring the latter. For specific projects, though, and if done right, the type of metrics I described first can be a very useful tool.
"What is the metrics?": persuading academics to take the red pill
I'm managing a new translational research project that will start recruiting patients in February. The team includes a nurse, radiologists, a medical oncologist, a statistician, pathologists, molecular biologists, and bioinformaticians. Everyone's role is very clearly laid out in the protocol, and we have a clear target number of patients to recruit. On a day-to-day basis, the nurse and radiologists don't need to know about the DNA sequencing steps. The statistician doesn't need to know about patient recruitment. The bioinformatics team don't need to know about the pathological assessment of the samples.
For these reasons, and because organising meetings with clinicians is a bitch, I'm going to manage this project by getting everyone to report on specific metrics by email on a monthly basis. I'll compile the numbers, and no doubt spend a happy few hours fiddling about with graphs to figure out the best way of visualising our progress.
Here's a blog-safe version of the spreadsheet I'm going to ask people to fill in each month:
The whole team will meet after each of the first five patients has cleared the first three phases, to make sure everything is working as it should and that no steps are being missed. After we're up and running, I'll set up a monthly alert in Outlook as if this was a meeting, reminding everyone to fill in the numbers for the tasks assigned to them in the spreadsheet, and email their version to me. I'll obviously also make sure that all problems are reported to me. This is a good team and I have no worries about compliance - I won't even need to bribe them with brownies!
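The compile-and-check step of this monthly loop could be scripted rather than done by hand. Here's a minimal Python sketch of the idea: each emailed spreadsheet becomes a dictionary of task counts, the counts get merged, and any task lagging behind where it should be at this point in the project gets flagged, much like the QC-vs-R&D bottleneck check from my industry days. The task names and targets below are made up for illustration; they aren't the real columns from my spreadsheet.

```python
# Hypothetical monthly reports: each team member emails cumulative counts
# for the tasks assigned to them. Task names and targets are illustrative.
TARGETS = {"patients recruited": 100, "samples sequenced": 100}

def compile_reports(reports):
    """Merge the per-person monthly reports into one tally per task."""
    totals = {}
    for report in reports:
        for task, count in report.items():
            totals[task] = totals.get(task, 0) + count
    return totals

def lagging_tasks(totals, targets, expected_fraction):
    """Flag tasks whose count falls short of the expected fraction of the
    target at this point in the project -- the bottleneck check."""
    return [task for task, target in targets.items()
            if totals.get(task, 0) < expected_fraction * target]

# Example: three monthly emails, checked six months into a twelve-month project.
reports = [
    {"patients recruited": 40},
    {"samples sequenced": 20},
    {"patients recruited": 15, "samples sequenced": 10},
]
totals = compile_reports(reports)
print(totals)                               # {'patients recruited': 55, 'samples sequenced': 30}
print(lagging_tasks(totals, TARGETS, 0.5))  # ['samples sequenced']
```

From there it's a short step to feeding the compiled totals into a plotting library for the progress graphs, which is the part I'm actually looking forward to.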
This will be the first time I've used a metrics reporting system to help me manage a research project. I'm sure there will be wrinkles to iron out, but overall I'm reasonably confident that it'll work. After assessing how it goes in this new project, which I can set up as I like, I'll also roll the system out to some of our department's existing stand-alone projects. I already have the metrics listed for one of these projects - an easy task if the grant proposal is well organised and progress is readily quantifiable. A couple of people (i.e. the statistician and lead bioinformatician) will end up with multiple sets of metrics to complete, but most other lab members are only involved in one or maybe two of these projects and won't have to spend much time on it at all. As with the meetings, it may take time for people to adapt and form the appropriate habits, but hopefully everyone will be reasonably happy with the system.
Especially me. Remember, I get to play with graphs!