Monday 30 March 2020

The Covid-19 spying scandal: Do you trust your boss to watch you?

Your boss can check what you have done at any hour of any day – and many are doing just that

By Bernard Thompson


The Covid-19 lockdowns have had numerous consequences for workers, some easily predictable, others less apparent.

One of the most obvious is the huge increase in working from home.

Across the world, millions of workers are discovering that there really is little need to go into the office to do their jobs, which are largely conducted using a laptop and a phone.

And while some are struggling with isolation, or with self-discipline once the daily routine has been broken, others are finding that they quite like stumbling out of bed and into their workspace.

But some of those may be enjoying those freedoms too much. At least that's the opinion of Axos Financial Inc. CEO Gregory Garrabrants.

“We have seen individuals taking unfair advantage of flexible work arrangements,” Garrabrants told Bloomberg News.

“If daily tasks aren’t completed, workers will be subject to disciplinary action, up to and including termination.”

Of course, today we have the technology to monitor workers remotely – the truth is that we've had this tech for several years, but many workers were happily oblivious either to its existence or to its impact on them.

The methods are various and, depending on your viewpoint, either sneaky and intrusive or great ways to optimize employee performance.

Stillio, for example, takes automatic screenshots at intervals set by the user or administrator and can email and archive the images for later review. Another example is New Jersey-based Screenshot Monitor.

Thus, your boss can check what you have done at any hour of any day to ensure that you're not taking extended lunchbreaks or surreptitiously logging off early.
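To give a sense of how simple the underlying mechanism is, here is a minimal, illustrative sketch of interval-based screen capture in Python. It is not the actual implementation of Stillio or Screenshot Monitor; the five-minute interval, the local output folder and the use of the third-party mss library are my own assumptions.

```python
# Illustrative sketch only -- not the code of Stillio or Screenshot Monitor.
# Captures the full screen at a fixed interval and saves timestamped PNGs
# to a local folder; a real product would upload them to a central server.
# Requires the third-party "mss" library (pip install mss).
import time
from datetime import datetime
from pathlib import Path

import mss
import mss.tools

CAPTURE_INTERVAL_SECONDS = 300     # assumed: one screenshot every 5 minutes
OUTPUT_DIR = Path("screenshots")   # assumed: local archive folder
OUTPUT_DIR.mkdir(exist_ok=True)

with mss.mss() as screen:
    while True:
        timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        image = screen.grab(screen.monitors[0])   # monitor 0 = whole virtual screen
        mss.tools.to_png(image.rgb, image.size,
                         output=str(OUTPUT_DIR / f"{timestamp}.png"))
        time.sleep(CAPTURE_INTERVAL_SECONDS)
```

A commercial package would also tie each image to an employee account and archive it for later review, but the capture loop itself is no more complicated than this.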

Keyloggers (or keystroke loggers) have been with us in some form since the early 1970s, when the Soviet Union's intelligence services developed a miniature bug that could register the movements of the printhead in the IBM Selectric II and III typewriters used in the US embassy and consulate in the Soviet Union.

A crude software keylogger appeared as far back as 1983, when Perry Kivolowitz posted one to Usenet.

Other early keyloggers were hardware devices attached to a computer's keyboard port; their modern descendants plug into a USB port.

While such hardware and software often had sinister uses – such as hacking passwords or harvesting bank details – it is becoming increasingly acceptable for employers to record the keystrokes of their employees, in some form, even if ostensibly to enhance data security.

There are other varieties of spying software, however. Some packages correlate pairs of keywords, searches or other activities in an attempt to flag up potentially high-risk behaviour.

For example, if an employee were to attempt to print or export a company's client list within a few days of writing a letter of resignation, such packages could either automatically deny the user access or send a high-priority message to HR or security.
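A very rough sketch of how such a correlation rule might be expressed follows; the event names, the seven-day window and the follow-up action are purely hypothetical and do not reflect any particular vendor's product.

```python
# Hypothetical sketch of a correlation rule: flag a sensitive export that
# happens shortly after a resignation-related event by the same user.
# Event names, the time window and the response are invented for illustration.
from datetime import datetime, timedelta
from typing import NamedTuple

class Event(NamedTuple):
    user: str
    kind: str        # e.g. "resignation_letter_detected", "client_list_export"
    when: datetime

RISK_WINDOW = timedelta(days=7)    # assumed window between the two events

def flag_high_risk(events: list[Event]) -> list[tuple[Event, Event]]:
    """Return (trigger, export) pairs where the same user exported the
    client list within RISK_WINDOW of a detected resignation letter."""
    triggers = [e for e in events if e.kind == "resignation_letter_detected"]
    exports = [e for e in events if e.kind == "client_list_export"]
    return [
        (t, x)
        for t in triggers
        for x in exports
        if x.user == t.user and timedelta(0) <= x.when - t.when <= RISK_WINDOW
    ]

# A real system would then deny the export outright or send a
# high-priority alert to HR or security for each flagged pair.
```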

Similar software is used in schools to help combat radicalisation or other harms that might be facilitated through web browsing.

And some remote workers can be required to keep a webcam active, which checks at random intervals that they are present and alert at their computers.

All of this raises questions that will shape our concept of the employer-employee relationship.

Many bosses will see this in simple terms – “You should be working, and you've got nothing to fear if you've nothing to hide.”

But that view only considers the relationship between workers and supervisors as one of control, without the element of trust.

It's a given in most organisations that careful recruitment and monitoring through performance indicators are the best means of maintaining employee contributions. And any manager knows that high worker morale is a key factor in optimizing output.

It is conventionally assumed that, where performance is not purely assessed in terms of statistically-measurable output, workers who feel happy, trusted and valued are the most likely to return those sentiments with loyalty and “going the extra mile” when needed.

If you pay your employees simply according to boxes filled, they will fill as many as they can. But if how carefully they pack those boxes is something that cannot be controlled, other factors, such as pride in doing a good job or attitude towards the employer, will come into play.

But another element of trust is how far the employee can believe in the good will and character of their boss.

For example, I knew someone who managed an industrial sales team and used a screengrabbing package similar to Stillio, mentioned above. He thought it quite reasonable and insisted that, “if they were writing a personal email or checking their bank account, I just wouldn't look at that.”

Would that reassure you? If your significant other wrote you an email about an unfortunate argument you had the previous night, would you feel comfortable that your boss might see a screenshot but would never read it?

There are “acceptable use” rules, which usually allow for some limited personal activities on company equipment, so is it acceptable that an employer or supervisor could have access to your personal information without you knowing about it?

Many would say not.

Field sales teams have always presented difficulties in terms of monitoring and control and I can think of two other examples from people I have known personally.

One was the regional manager of a trade association, who had a roving remit to visit member stores and try to recruit new members. Having almost complete autonomy to choose any area in his region (which covered the whole of Scotland), he wasn't answerable to anyone on a day-to-day basis.

At one particularly angry meeting, he faced accusations of having gone on an unreported holiday, as he hadn't answered his phone for several days. That led to calls to have a tracker installed on his PC so that his activity could be checked at all times – this being in the days before Find My Device and other tracking solutions were freely available.

The other case was a sales rep who admitted that she often managed her own diary to fit in an extended lunch with friends, or to squeeze five appointments into the time allotted for four so she could finish early another day.

When her company started talking about insisting that sales reps have phone tracking software activated, her private argument to me was that she had been their top rep for five years and so they should judge her on her results and customer satisfaction, both of which were sky-high.

However, both cases ended similarly. The regional manager (who, my gut told me, probably was taking extra time off) resigned “on principle”, perhaps fearing that he would be dismissed.

The rep also resigned, citing dissatisfaction with the sales manager who had been trying to implement increased monitoring.

In such cases, there are arguments in favour of and against active monitoring. In the case of the regional manager, some form of monitoring, either geographical or of computer activity, might have seemed reasonable to an organisation without the resources of a full supervisory management structure.

As for the sales rep, in the end, her track record should surely have been sufficient for her company to observe the old adage, “if it ain't broke, don't fix it”.

However, I am also reminded of another contact who works as a software developer (I'm in regular contact with several programmers).

Alex is one of those guys who never stops working, including when he's supposed to be on holiday. In fact, I've been present when his boss has told him to stop working so hard because he'll burn himself out.

At a rough guess, I'd estimate that Alex probably does at least 25% more work than his average colleague and probably double the acceptable minimum.

You could argue that the fact that his boss can see his input and the timing of his merge requests, etc., is helping him to get good advice about not pushing himself so hard. And Alex's boss is a particularly caring individual who sets great store by the morale and welfare of his team.

But how many bosses are like that?

It's easy to say that software solutions are to be used to catch out the lazy worker. But what about the Alexes of the world – and there are many – who constantly deliver more than their contract requires?

How many of those workers – and the vast majority do at least the required minimum without supervision – will be paid more, told to take more vacation days, told to stop working so hard as their bosses find out precisely how much extra work is done, unbidden, every day?

How far can you really trust your boss?

Wednesday 25 March 2020

AI Bias: Speech recognition technology is 'racist'

Voice recognition tech makes more errors with African American voices

By Bernard Thompson

Speech recognition technologies are rife with racial biases, according to a new study by Stanford University.

The results, published in the journal Proceedings of the National Academy of Sciences, showed that, on average, systems developed by Amazon, Apple, Google, IBM and Microsoft misunderstood 35% of the words spoken by African Americans, compared with 19% of those spoken by white Americans.

(Scottish people like your humble writer already understand something about that.)

The Stanford tests were carried out between May and June 2019, using the same words, with participants of multiple ages and genders.

The researchers tested each of the five companies' technologies with more than 2,000 speech samples from recorded interviews with white Americans and African Americans.

(It should be noted that these biases will not necessarily be found in popular products such as Alexa and Siri, since the companies have not revealed whether those products use the same underlying technology.)

The error rates were highest for African American males – particularly when using vernacular.
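Figures like these are word error rates: the number of substitutions, insertions and deletions needed to turn the recognizer's output into a human reference transcript, divided by the number of words in the reference. A minimal sketch of that calculation, with invented example transcripts, looks like this:

```python
# Minimal sketch of a word error rate (WER) calculation: the word-level
# edit distance between a reference transcript and the recognizer's output,
# divided by the reference length. The example sentences are invented.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# A 35% error rate means roughly one word in three comes out wrong:
print(word_error_rate("he said he would be there by noon",
                      "he set he would we there by new"))  # 0.375
```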

Sharad Goel, a Stanford assistant professor of computational engineering, who oversaw the research, believes the findings show the need for independent audits of new tech: “We can’t count on companies to regulate themselves.”

Meanwhile, Ravi Shroff, a New York University professor of statistics, who explores bias and discrimination in new technologies, commented: “I don’t understand why there is not more due diligence from these companies before these technologies are released. I don’t understand why we keep seeing these problems.”

The problem appears to be an old one – the data sets used in developing software and Artificial Intelligence tools are typically selected by a very narrow demographic group, namely white males in their 20s and 30s.

As far back as 2016, Joy Buolamwini, a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab, presented a popular TED Talk on the issue of what she terms "the coded gaze", or algorithmic bias.

Ms Buolamwini founded the Algorithmic Justice League, an organisation that seeks to challenge bias in decision-making software.

In her TED Talk, she noted: “Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that one in two adults in the US – that's 117 million people – have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.”

Both Ms Buolamwini's findings and the latest Stanford study raise profound issues as AI enters more and more areas of our lives, from airport facial scanners to banking security.

For example, HSBC uses a voice recognition step requiring customers to say, “My voice is my password”, when making telephone inquiries.

It is not difficult to see the considerable inconvenience of being unable to access banking facilities or being delayed at airports, due to nothing other than the ethnicity of the individual concerned.

But, as Ms Buolamwini points out, these inherent biases could have more serious implications, should they extend to evaluating people as more or less likely to exhibit criminal behaviour or pose other theoretical risks.

One developer for a major international company (who did not want to be identified) explained the situation from his perspective: “By now, we should all know about these issues but, in reality, there is never enough time for testing so factoring in diversity just doesn't happen as it should.”

Seemingly reinforcing Professor Goel's call for regulation, the developer went on: “Typically, management are always pushing to get the products to market to start making money as soon as possible, and that's why you can expect these problems to continue.”

The very demographic factors that lead to these biases – the relative lack of diversity in Big Tech – may prove to be obstacles to finding companies willing to invest in addressing the issues.

Creating more diverse data sets and introducing more rigorous testing with a specific focus on reducing bias requires the will from senior management to spend more money, pre-market, and accept delays that may benefit less scrupulous competitors.

But perhaps these issues will soon come back to bite the very people currently causing the problem.

With huge growth in software and AI development in Asia, maybe soon it will be white users who will be complaining that the devices impacting on their lives don't recognise their faces and voices.

And with increasingly sinister uses – such as China's use of facial recognition as part of its “social credit” policy – quite soon these applications may be making decisions on our very worth to society as human beings.

If the white males still wielding power in Silicon Valley are really smart, they will get serious about bias while they can.