T@W Weekly: Diversity Challenges

Labor laws, LinkedIn, bad hiring, privacy, and "Columbusing"

The Word: Unprepared

Are we prepared for an automated, AI-driven, robotic future? While some organizations and segments of the workplace might be more prepared than others, there is at least one area that is simply not ready: Our workplace laws.

Ah yes, I know that for some people, laws are antiquated ideas being disrupted out of existence. To them, the lack of meaningful laws regulating new technologies is a feature, not a bug.

Jeffrey Hirsch details this phenomenon on The Conversation. It’s a must-read:

All that automation yields data that can be used to analyze workers’ performance. Those analyses, whether done by humans or software programs, may affect who is hired, fired, promoted and given raises. Some artificial intelligence programs can mine and manipulate the data to predict future actions, such as who is likely to quit their job, or to diagnose medical conditions.

If your job doesn’t currently involve these types of technologies, it likely will in the very near future. This worries me—a labor and employment law scholar who researches the role of technology in the workplace—because unless significant changes are made to American workplace laws, these sorts of surveillance and privacy invasions will be perfectly legal.

Read the whole damn thing. In a better world, the largest group of HR leaders would be proactively taking this issue on, pushing for reasonable regulation at the federal level while putting voluntary measures into place that show a commitment to privacy and upholding the spirit of workplace laws.

It may be the wild, wild west when it comes to privacy but people leaders inside organizations don’t have to act like it.

What the Click?

  • LinkedIn lost its appeal to restrict hiQ’s access to publicly available information on its platform. This is a continuation of many years of LinkedIn battling so-called scrapers from using information that its users make public. BONUS: An inside explanation on the LinkedIn algorithm.

  • Josh Bersin gives a preview of his HR Tech Disruptions 2020 report. You’ll be able to get the full report if you’re attending one of LRP’s HR Technology conferences.

  • Monster.com said a third party exposed user data. Privacy continues to be a major issue that goes underreported in HR tech circles. Think about the loads of confidential information that many of these technologies hold. I’m just waiting for the Equifax moment to hit this space.

  • Why does HR keep buying the wrong software? This brief rundown from Rachel Ranosa at HRD HRTechNews gives a good picture of what goes wrong. The underpinning of a lot of failures: Not defining what success looks like.

  • John Sumser takes a look at culture and understanding its true nature. He also introduced me to a new term: Columbusing. It’s the act of thinking you discovered something new and novel when it’s been there for a long time.

People are Bad at Hiring

There are a lot of people in the world who believe they can make a great hiring decision without a ton of information. These people are incorrect. Whether it’s a scout for an NBA team or a hiring manager in a distribution warehouse, pure instinct for making a good hire is often driven by luck and an average hit rate on candidates.

I used to be one of these people, so I know the challenge. I’ve made many great hires over the course of my career, and I’ve also recommended great folks who turned out to be wonderful. And, like everyone else, I’ve also made a ton of bad hires, though those I try to forget.

Could AI help? That’s the case Ben Eubanks makes on the Lighthouse Research & Advisory blog:

The data show that the common ways we interview and many of the methods companies use to rank candidates (university, college grades, or other demographic data) are highly unreliable statistically. Translation: they are terrible as a gauge for whether someone can do a job or not.

Instead we should rely on more reliable types of data sources, such as job samples (let someone try the job before they are hired to do the job), assessments, or structured/regimented interviews. If we can use these more predictive types of data, we CAN make better hires and improve quality of hire. And, interestingly enough, AI is positioned to do just that.

It is easy to blame AI (I have!). I love scapegoating new technology for all of our social ills. Still, I do think AI could help reduce the terribleness that our selection processes promote.

Most of the vendors I’ve talked to aren’t there yet. And to be fair, most organizations can’t even figure out how to flush out unreliable decision making from their hiring processes either. Diverse AI teams and trainable, unbiased data are big asks when it is rare to get one of them, much less both.

That said, we can do a lot better on selection with tools already at our disposal. The sad part is that most organizations don’t even try to get much better at a more rigorous, proven process.

T@W Podcast of the Week

Don MacPherson’s 12 Geniuses podcast is back for a new season, kicking off with Daniel Pink. They talk about the science of timing: when to tackle your most important work and when to handle the mind-numbing (but necessary) bullshit. It should empower everyone to take more control of their schedule — if they can make the time to listen to it (or read Pink’s book).

And Finally… Understanding Diversity Challenges

Diversity is an important initiative, but as I’ve covered here at T@W, it can be a “check the box” sort of exercise. It’s a cynical view, but it’s a reality for too many organizations I’ve spoken to.

There’s the other extreme of the pendulum swing, which is all about waving the rainbow flag, sitting cross-legged around the campfire holding hands, and other garden-variety, surface-level wokeness that accomplishes very little in practice.

I’m just a white guy with an opinion on everything, but I think we can all agree that if D&I is as important as we all say, we should be creating conditions that drive it as a meaningful outcome. And one of those steps is understanding an inherent challenge of diversity: Increasing trust within diverse teams.

Ignore the headline of the press release about some research out of the Netherlands on diversity and dig into the meat. Dr. Meir Shemla, Associate Professor of Organisational Behaviour at Rotterdam School of Management, said:

A benefit to diversity can be the wider range of information made available, but whether people share information in a team depends on how they view the team and its diversity. If they feel like they are working separately to each other, diversity is viewed negatively and becomes an obstacle to sharing information. If team members are encouraged to view the team as a whole, they are more likely to share information since the diversity of views and ideas are seen as useful to the group.

We found that it’s not enough to be diverse; we also have to manage how people view this diversity. Team members should feel part of one team working towards a common goal. People shouldn’t simply be reduced to representatives of their categories, such as ‘male’ or ‘female’, but seen as unique individuals with beneficial qualities.

Thinking intentionally about how you create teams and how you construct the goals around initiatives can increase the likelihood of high performance teams and better support for diversity. Seems like a win-win to me.

Cheers, Lance