Monday 30 March 2020

The Covid-19 spying scandal: Do you trust your boss to watch you?

Your boss can check what you have done at any hour of any day – and many are

By Bernard Thompson


The Covid-19 lockdowns have had numerous consequences for workers, both easily predictable and less apparent.

One of the most obvious is the huge increase in working from home.

Across the world, millions of workers are discovering that there really is little need to go into the office to do their jobs, which are largely conducted using a laptop and a phone.

And while some are struggling with issues such as isolation and difficulties with self-discipline, once that daily routine has been broken, others are finding that they quite like stumbling out of bed and into their workspace.

But some of those may be enjoying those freedoms too much. At least that's the opinion of Axos Financial Inc. CEO Gregory Garrabrants.

“We have seen individuals taking unfair advantage of flexible work arrangements,” Garrabrants told Bloomberg News.

“If daily tasks aren’t completed, workers will be subject to disciplinary action, up to and including termination.”

Of course, today we have the technology to monitor workers remotely – the truth is that we've had this tech for several years but many workers were happily oblivious either to its existence or its impact on them.

The methods are various and, depending on your viewpoint, either sneaky, intrusive or great ways to optimize employee performance.

Stillio, for example, takes automatic screenshots at intervals set by the user or administrator and has the capacity to email and archive the images for later review. Another example is New Jersey-based Screenshot Monitor.

Thus, your boss can check what you have done at any hour of any day to ensure that you're not taking extended lunchbreaks or surreptitiously logging off early.
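
The core mechanism behind such tools is surprisingly simple. Here is a minimal sketch in Python, assuming the cross-platform mss screenshot library is installed; the five-minute interval and the save-to-disk step are illustrative placeholders, not any vendor's actual behaviour – a real product would also upload or email the images.

```python
import time
from datetime import datetime

from mss import mss  # cross-platform screenshot library (pip install mss)

CAPTURE_INTERVAL_SECONDS = 300  # assumed five-minute interval


def capture_screens_forever(output_dir: str = ".") -> None:
    """Save a screenshot of the primary monitor at a fixed interval.

    A real monitoring product would upload or email each image;
    this sketch simply writes timestamped files to disk.
    """
    with mss() as screen:
        while True:
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            # monitor 1 is the primary display in mss's numbering
            screen.shot(mon=1, output=f"{output_dir}/capture-{stamp}.png")
            time.sleep(CAPTURE_INTERVAL_SECONDS)


if __name__ == "__main__":
    capture_screens_forever()
```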

Keyloggers (or keystroke loggers) have been with us in some form since the early 1970s, when the Soviet Union's intelligence services developed a miniature bug that could register the movements of the printhead in the IBM Selectric II and III typewriters used in the US embassy and consulate in the Soviet Union.
IBM Selectric III

A crude software keylogger appeared as far back as 1983, when Perry Kivolowitz wrote one and posted it to Usenet.

Other early keyloggers were hardware devices attached to a computer's keyboard port; their descendants are now available in USB form.

While such hardware and software often had sinister uses – such as hacking passwords or harvesting bank details – it is becoming increasingly acceptable for employers to record the keystrokes of their employees, in some form, even if ostensibly to enhance data security.

There are other varieties of spying software, however. Some packages correlate keyword searches or other activities in an attempt to flag up potentially high-risk behaviour.

For example, if an employee were to attempt to print or export a company's client list within a few days of writing a letter of resignation, such packages could either automatically deny the user access or send a high-priority message to HR or security.
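
The logic behind such a rule is plain event correlation. The sketch below is a hypothetical illustration only – the event names, the seven-day window and the data are all invented, and commercial packages are far more elaborate.

```python
from datetime import datetime, timedelta

# Hypothetical audit trail: (timestamp, user, action) records
EVENTS = [
    (datetime(2020, 3, 23, 9, 15), "jsmith", "wrote_resignation_letter"),
    (datetime(2020, 3, 25, 14, 2), "jsmith", "export_client_list"),
]

RISK_WINDOW = timedelta(days=7)  # assumed policy window


def flag_high_risk(events):
    """Flag any user who exports the client list within RISK_WINDOW
    of a resignation-related event."""
    alerts = []
    last_resignation = {}  # user -> time of most recent resignation event
    for when, user, action in sorted(events):
        if action == "wrote_resignation_letter":
            last_resignation[user] = when
        elif action == "export_client_list":
            if user in last_resignation and when - last_resignation[user] <= RISK_WINDOW:
                alerts.append((user, when))  # deny access or notify HR/security
    return alerts


print(flag_high_risk(EVENTS))  # [('jsmith', datetime.datetime(2020, 3, 25, 14, 2))]
```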

Similar software is used in schools to help combat radicalization or other harms that might be facilitated through web surfing.

And some remote workers can be required to have an active webcam, which randomly checks that they are present and alert on their computers.

All of this raises questions that will shape our concept of the employer-employee relationship.

Many bosses will see this in simple terms – “You should be working, and you've got nothing to fear if you've nothing to hide.”

But that view only considers the relationship between workers and supervisors as one of control, without the element of trust.

It's a given in most organisations that careful recruitment and monitoring through performance indicators are the best means of maintaining employee contributions. And any manager knows that high worker morale is a key factor in optimizing output.

It is conventionally assumed that, where performance is not purely assessed in terms of statistically measurable output, workers who feel happy, trusted and valued are the most likely to return those sentiments with loyalty and “going the extra mile” when needed.

If you pay your employees simply according to boxes filled, they will fill as many as they can. But if how carefully they pack those boxes is a factor that cannot be controlled, other factors – like pride in doing a good job or attitude to the employer – will come into play.

But another element of trust is how far the employee can believe in the good will and character of their boss.

For example, I knew someone who managed an industrial sales team and used a screengrabbing package similar to Stillio, mentioned above. He thought it quite reasonable and insisted that, “if they were writing a personal email or checking their bank account, I just wouldn't look at that.”

Would that reassure you? If your significant other wrote you an email about an unfortunate argument you had the previous night, would you feel comfortable that your boss might see a screenshot but would never read it?

There are “acceptable use” rules, which usually allow for some limited personal activities on company equipment, so is it acceptable that an employer or supervisor could have access to your personal information without you knowing about it?

Many would say not.

Field sales teams have always presented difficulties in terms of monitoring and control, and I can think of two other examples from people I have known personally.

One was the regional manager of a trade association, who had a roving remit to visit member stores and try to recruit new members. Having almost complete autonomy to choose any area in his region (which covered the whole of Scotland), he wasn't answerable to anyone on a day-to-day basis.

At one particularly angry meeting, he faced accusations of having gone on an unreported holiday, as he hadn't answered his phone for several days. That led to calls to have a tracker installed in his PC so that his activity could be checked at all times – this in the days before Find My Device and other tracking solutions were freely available.

The other case was a sales rep who admitted that she often managed her own diary to fit in an extended lunch with friends, or squeezed five appointments into the time needed for four so she could finish early another day.

When her company started talking about insisting that sales reps have phone tracking software activated, her private argument to me was that she had been their top rep for five years and so they should judge her on her results and customer satisfaction, both of which were sky-high.

However, both cases ended in similar outcomes. The regional manager (who, my gut told me, really was probably taking extra time off) resigned “on principle”, perhaps fearing that he would be dismissed.

The rep also resigned, citing dissatisfaction with the sales manager who had been trying to implement increased monitoring.

In such cases, there are arguments in favour of and against active monitoring. In the case of the regional manager, some form of monitoring, either geographical or of computer activity, might have seemed reasonable to an organisation without the resources of a full supervisory management structure.

As for the sales rep, in the end, her track record should surely have been sufficient for her company to observe the old adage, “if it ain't broke, don't fix it”.

However, I am also reminded of another contact who works as a software developer (I'm in regular contact with several programmers).

Alex is one of those guys who never stops working, including when he's supposed to be on holiday. In fact, I've been present when his boss has told him to stop working so hard because he'll burn himself out.

At a rough guess, I'd estimate that Alex probably does at least 25% more work than his average colleague and probably double the acceptable minimum.

You could argue that the fact that his boss can see his input and the timing of his merge requests, etc., is helping him to get good advice about not pushing himself so hard. And Alex's boss is a particularly caring individual who sets great store by the morale and welfare of his team.

But how many bosses are like that?

It's easy to say that software solutions are to be used to catch out the lazy worker. But what about the Alexes of the world – and there are many – who constantly deliver more than their contract requires?

How many of those workers – and the vast majority do at least the required minimum without supervision – will be paid more, told to take more vacation days, or told to stop working so hard, as their bosses find out precisely how much extra work is done, unbidden, every day?

How far can you really trust your boss?

Wednesday 25 March 2020

AI Bias: Speech recognition technology is 'racist'

Voice recognition tech makes more errors with African American voices

By Bernard Thompson

Speech recognition technologies are rife with racial biases, according to a new study by Stanford University.

The results, published in the journal Proceedings of the National Academy of Sciences, showed that, on average, systems developed by Amazon, Apple, Google, IBM and Microsoft misunderstood 35% of the words spoken by African Americans compared to 19% of those spoken by white Americans.

(Scottish people like your humble writer already understand something about that.)

The Stanford tests were carried out between May and June 2019, using the same words, with participants of multiple ages and genders.

The researchers tested each of the five companies' technologies with more than 2,000 speech samples from recorded interviews with white Americans and African Americans.
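
Figures like 35% and 19% are, in effect, word error rates (WER) – the standard speech recognition metric, counting substituted, deleted and inserted words against a human reference transcript. A minimal sketch of the calculation, assuming simple whitespace tokenisation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    via the classic Levenshtein dynamic programme over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j]: edit distance between first i ref words and first j hyp words
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)


# Two errors in an eight-word reference: WER = 0.25
print(word_error_rate("the quick brown fox jumps over the dog",
                      "the quick brown box jumps over a dog"))
```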

(It should be noted that these biases won't necessarily be found in popular products such as Alexa and Siri – the companies have not revealed whether those products use the same underlying technology.)

The error rates were highest for African American males – particularly when they used vernacular speech.

Sharad Goel, a Stanford assistant professor of computational engineering, who oversaw the research, believes the findings show the need for independent audits of new tech: “We can’t count on companies to regulate themselves.”

Meanwhile, Ravi Shroff, a New York University professor of statistics, who explores bias and discrimination in new technologies, commented: “I don’t understand why there is not more due diligence from these companies before these technologies are released. I don’t understand why we keep seeing these problems.”

The problem appears to be an old one – that the data sets used in developing software and Artificial Intelligence tools are typically selected by a very specific demographic group, namely white males in their 20s and 30s.

As far back as 2016, Joy Buolamwini, a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab, presented a popular TED Talk on the issue of what she terms "the coded gaze", or algorithmic bias.

Ms Buolamwini founded the Algorithmic Justice League, an organisation that seeks to challenge bias in decision-making software.

In her TED Talk, she noted: “Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that one in two adults in the US – that's 117 million people – have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.”

Both Ms Buolamwini's findings and the latest Stanford study raise profound issues as AI enters more and more areas of our lives, from airport facial scanners to banking security.

For example, HSBC uses a voice recognition step requiring customers to say, “My voice is my password”, when making telephone inquiries.

It is not difficult to see the considerable inconvenience of being unable to access banking facilities or being delayed at airports, due to nothing other than the ethnicity of the individual concerned.

But, as Ms Buolamwini points out, these inherent biases could have more serious implications, should they extend to evaluating people as more or less likely to exhibit criminal behaviour or pose other theoretical risks.

One developer for a major international company (who did not want to be identified) explained the situation from his perspective: “By now, we should all know about these issues but, in reality, there is never enough time for testing so factoring in diversity just doesn't happen as it should.”

Seemingly reinforcing Professor Goel's call for regulation, the developer went on: “Typically, management are always pushing to get the products to market to start making money as soon as possible, and that's why you can expect these problems to continue.”

The very demographic factors that lead to these biases – the relative lack of diversity in Big Tech – may prove to be obstacles to finding companies willing to invest in addressing the issues.

Creating more diverse data sets and introducing more rigorous testing with a specific focus on reducing bias requires the will from senior management to spend more money, pre-market, and accept delays that may benefit less scrupulous competitors.

But perhaps these issues will soon come back to bite the very people currently causing the problem.

With huge growth in software and AI development in Asia, it may soon be white users who are complaining that the devices impacting their lives don't recognise their faces and voices.

And with increasingly sinister uses – such as China's use of facial recognition as part of its policy of awarding “social credit” points – quite soon these applications may be making decisions on our very worth to society as human beings.

If the white males still wielding power in Silicon Valley are really smart, they will get serious about bias while they can.



Monday 26 June 2017

Death of the Salesman?



Computers will do it better

By Bernard Thompson

Selling is a special art, which many believe cannot be learned by everyone.

It takes a particular skill set to build a rapport with a client, negotiate the best deal for yourself and continually push for more without irritating the buyer to the extent that they no longer wish to co-operate.

Product knowledge, persuasiveness – “the gift of the gab” – energy, ambition, psychology and an understanding of your company's needs as well as those of the buyer are all vital.

Sales people are typically personable, often shallow (even annoying over long periods) and usually as self-interested as they are self-motivated.

Companies hire-and-fire reps with a regularity that would frighten people in other fields, most often employing them on a sink-or-swim basis.

The best stick at it for a decade or more, making enviable money. Most fall at the first hurdle or burn out over a period of a few years.

But the person-to-person sales system we have come to know is inherently flawed, as anyone in an accounting department will know.

Tension

I was once present during a classic argument between an international sales rep and an accountant who, ironically, were newly-weds.

She argued that it was the sales team that kept any company alive, as they were the ones bringing in the cash, without whom there would be no business. She had a point.

He countered that they cared only about their targets, would do anything to get a bonus and cared nothing for the company's profit margin. He had a point, too.

Of course, the truth is more nuanced. Different departments in companies often fail to see the value that others bring.

The accountants and financial controllers are prone to distrusting the sales teams, believing that they must be reined in in order for them to deliver value to the company in the long term, rather than being focused on the next sale at any cost.

Salespeople tend to look at minimum selling prices and fixed (even statutory) conditions as shackles imposed by people who don't understand how hard it is to go out there every day, facing customers who constantly demand more.

But we should learn something from the fact that so few people excel at sales. And that is that the system is fraught with error, which costs businesses around the world billions in erroneous or non-optimal sales and costs many companies their existence.

It should be obvious why.

The power of a smile

I once knew a very successful sales rep who had all of the qualities mentioned above. Smart, funny, likeable, motivated, she was the top rep in her company, year after year.

She once laughed about how a nice smile was so helpful in getting a good deal.

But think about that for a moment. Imagine the performance of your company being affected by the smile of your reps. Someone with a less appealing smile sells less. The rep receives bad news, loses a tooth, or even has a garlic-and-herb baguette for lunch – and your profits fall.

And there's something else that the accountant was referring to. By instinct, sales people want to make a sale at almost any cost and that often means bending the rules.

If they know the financial controller is away on holiday, then the assistant whom they've been buttering up for the last year may be more forgiving in authorising sales that don't quite meet the company standard.

Or they know that someone in accounts is overworked and likely to just wave through some transactions, rather than analyse them as closely as they should. And the converse of much of this applies to the buyers' side.

But ultra-fast computers with artificial intelligence will make these issues things of the past and the evidence is already out there.

Firstly, computers trading with each other is not new. It's just extremely expensive and therefore not currently suited to small-scale transactions, even between very large companies.
Tuomas Sandholm (centre) developed Libratus with Ph.D. student Noam Brown
Secondly, the success of Libratus in beating the world's top Texas Hold'em poker players has shown that, when costs permit, computer-to-computer trading will be a more efficient system for everyone.

The key moment will be when the cost of owning the computer (or, more likely, hiring the service from an outside provider) is less than the resulting efficiency savings or profit increase.

Mind-reading

What was so remarkable about Libratus's victory is that it was a case of a computer, using reinforcement learning, being more effective than world-class human experts when there were unknown variables (the opponents' hands as well as the river cards).

As Wired reported, player Dong Kim started to feel as if Libratus could see his cards. "I’m not accusing it of cheating. It was just that good."

Imagine transferring similar principles to sales, which you could also describe as being something like a game of poker – both sides know their own conditions, some of the market conditions (which we could compare to the river in Hold'em) and try to infer what the other needs to make a deal.

They then use human interaction, such as bluffing, to gain an advantage. Libratus did just that, to the astonishment even of Carnegie Mellon University professor of computer science Tuomas Sandholm who, with his PhD student Noam Brown, built Libratus.

Heralding the victory, Brown said at the time: “We didn’t tell Libratus how to play poker. We gave it the rules of poker and said 'learn on your own'.”

Applying that to sales should be easy, in time. Program it with the necessary rules and conditions – statutory, financial, ethical, timeous, etc. – and you would have a computer that knows exactly how to get the best deal almost all of the time.

Companies relying on humans will be at a disadvantage, meaning that they will also have to automate the buying and selling processes.

It's even conceivable that you could have one company and one computer acting as both the buyer and seller.

The humanity

But what about the human contribution? The fact is that most people who enter sales aren't particularly good at it and that fact is generally discovered at the company's expense.

Buyers can do even more damage, with one mistaken purchasing decision threatening entire companies. (I've seen it happen: a business manager bought an entire range of products without doing effective research. They flopped, bringing a whole division of an international company to its knees and leaving him out of a job – with a fat pay-off.)

The upshot of this is that, ultimately, it should be win-win for the companies that can afford it but may provide a major obstacle for startups lacking the funds.

A win-win in the sense that, for the first time, buying and selling should be fully optimised, meaning that the market finds its correct level. It would also mean an end to high-pressure sales techniques preying on human weakness, to the sales rep or manager sleeping with the buyer (I've also witnessed that, between two international companies – against the rules, of course) and to other forms of corruption.

It will be quick, efficient, cost-effective, will reduce waste and, thanks to the Internet of Things, the logistics will also be taken care of.

It is notoriously difficult to predict timescales as technology accelerates at an unpredictable rate. But ten years seems too soon for the costs to be brought within the means of all but the biggest companies. It will probably already have started within two decades.

And what about the reps? Well, no one is saying they have to stop smiling.

How computers will replace doctors

Your doctor can only process so many details at one time. Computers can process many times more data in seconds.

By Bernard Thompson

Do you ever wonder how good your doctor is?

When you go to that person's surgery, you are often literally putting your life in their hands – but how do you measure their competence?

Naturally, doctors are people and people are imperfect so, logically, some are better than others. But, however good or bad they may be – and their patients are often in no position to judge – you generally do what they tell you to do.

You put chemicals into your body at their behest, eat what they tell you to, have them make decisions for your children and trust them to spot if you are suffering from a life-threatening condition, even when you may feel quite healthy.

Thankfully, they are highly trained, mostly for about seven years at university, followed by years of practice.

But here's the rub – when you strip away the title and the aura that comes with being called “doctor”, effectively you are looking at a knowledge bank which monitors symptoms.

If they are specialists, their knowledge and experience will be highly-specific and they will have access to high-tech equipment. But when you think about it, it's technology that is advancing medical science far more than doctors.

And that suggests that doctors will one day – perhaps sooner than you imagine – be replaced by computers.

In fact, it is not too far-fetched to suggest that most people graduating in medicine today will be redundant by the time they are 60.

Doctors check symptoms and match them against knowledge from their training – with imperfect recall, and based on facts largely restricted to their own medical culture.

Measuring temperature, breathing, heart-rate and blood pressure are easy, with equipment that anyone could buy at home.

Blood, urine, etc. get sent to a lab for chemical tests.

Doctors will look into your eyes and listen to your breathing but a machine could do either to a higher degree of accuracy.

They will prod you and feel around inside you, but the most detailed information comes from X-rays, heart monitors and body scanners.

That leaves the only key human elements as the doctor's knowledge of your records, their empathy and their instincts.

And this is where big data processing will eventually make doctors virtually redundant. Imagine a database that has every medical record in history, every medical journal, all available data on the effectiveness of every treatment worldwide.

The process has already begun in radiology.

A highly specialised field, radiology entails diagnosing and treating disease and injury, using medical imaging techniques such as X-rays, computed tomography (CT), magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), fusion imaging, and ultrasound.

But, as these outputs are prepared digitally, they can be assessed more accurately by computers using artificial intelligence than by even the best-trained humans.
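
For illustration, the shape of such a pipeline can be sketched in a few lines of Python with PyTorch. Everything here is a stand-in assumption – the generic ImageNet backbone, the two labels and the 0.5 threshold. A freshly re-headed network like this is untrained and in no sense a diagnostic tool.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A generic pretrained backbone stands in for a model actually trained on scans
model = models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # healthy/abnormal head (untrained)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def classify_scan(path: str) -> str:
    """Return a purely illustrative label for a single image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    return "abnormal" if probs[1] > 0.5 else "healthy"
```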

In 2012, Sun Microsystems co-founder Vinod Khosla predicted that algorithms would replace 80 per cent of doctors, and later claimed that radiologists still practising in another 10 years would be “killing patients.”

Curtis Langlotz, a radiologist at Stanford, had a less dramatic view: “AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t”.

The first algorithm allowed to make a medical decision without the need for a doctor to check the image was approved by the US Food and Drug Administration (FDA) in 2018.

IDx Technologies' program examines retinal images to detect diabetic retinopathy and is 87 per cent accurate. And, as no doctor is involved, the company has full legal liability for any medical errors.

Culture

It's well worth remembering that medical decisions are not simply a matter of objective science.

Doctors in different countries often have different approaches to medical care – for example, being more or less likely to perform Caesarean sections at an early stage in labour.

That means that your treatment is not solely guided by what is best for you, but also by the medical culture.

But it doesn't have to be that way. Pooling that data would allow a verifiable method of establishing what is statistically the best treatment for you.

Imagine that information being applied to you, specifically, in the most minute detail.

The speed with which enormous amounts of data can be evaluated is what is likely to make this revolution happen.

Your doctor can only process so many details at one time. Computers can process many times more data in seconds. They can also identify associations that traditionally come around through specific tests or by chance.

Frank's sign, for example (a diagonal crease on the earlobe), has been linked to cardiovascular disease, diabetes and risk of stroke.

But we are on the cusp of an era in which computers will be able to identify patterns based on symptoms that even the majority of doctors wouldn't notice. Maybe people with red hair tend to respond better to certain medication. Perhaps people with long index fingers are more prone to certain conditions.

The computers will store all this data about you and cross-reference it against billions of others across the world and throughout the ages.
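
As a miniature, hedged sketch of what that cross-referencing might look like, the following scans a toy table of patient features for associations with treatment response – every feature name and data point below is invented for illustration.

```python
# Toy association scan: which binary patient features co-occur with a good
# response to treatment? All data below is invented for illustration.
PATIENTS = [
    {"red_hair": 1, "long_index_finger": 0, "earlobe_crease": 0, "responded": 1},
    {"red_hair": 1, "long_index_finger": 1, "earlobe_crease": 0, "responded": 1},
    {"red_hair": 0, "long_index_finger": 0, "earlobe_crease": 1, "responded": 0},
    {"red_hair": 0, "long_index_finger": 1, "earlobe_crease": 1, "responded": 0},
    {"red_hair": 1, "long_index_finger": 0, "earlobe_crease": 1, "responded": 1},
    {"red_hair": 0, "long_index_finger": 1, "earlobe_crease": 0, "responded": 0},
]


def response_rate_by_feature(patients, feature):
    """Compare response rates for patients with and without a given feature."""
    with_f = [p["responded"] for p in patients if p[feature]]
    without = [p["responded"] for p in patients if not p[feature]]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(with_f), rate(without)


for feature in ("red_hair", "long_index_finger", "earlobe_crease"):
    with_rate, without_rate = response_rate_by_feature(PATIENTS, feature)
    print(f"{feature}: {with_rate:.0%} responded with it, {without_rate:.0%} without")
```

Run over billions of real records with proper statistical controls, that same simple scan is the essence of what big data processing promises medicine.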

They will instantly know what the latest treatments are and calculate their risks and likelihoods of success to a fine degree of accuracy.

Corporate resistance

The choice of medication, therefore, should not be influenced by the drug company reps who took your doctor on a trip to Paris but by what is best for you.

As with many things, corporate resistance will be one of the biggest obstacles to be overcome, as Big Pharma will not wish to allow the production of generic medicines – or to lose the advantages of having highly-incentivised sales teams lobbying providers of medical care.

Nevertheless, the day will come when it will be possible to produce medication to exactly match your body chemistry, perhaps even 3-D-printed for you at home.

But what about the human touch? Well, that is a real benefit but think about this: how many times have you had an unpleasant or even upsetting encounter with a medical professional?

Wouldn't a cold machine be, in some ways, better? And, furthermore, artificially intelligent robots will very soon be able to replicate empathetic behaviour, so talking to a nice “person” who knows exactly what to say will also be possible.

Set against the potential that technology brings, it should be easy to see how doctors can and largely will be replaced and this will not be limited to diagnosis and medication.

Operations

Mehran Anvari controls his robot surgeon, conducting keyhole surgery (St Joseph's Healthcare)
Operations are already being carried out remotely by doctors controlling robots. Eventually, such robots will be autonomous – not subject to errors of judgement or fatigue and, with no shaky hands, able to perform 24 hours a day, seven days a week, without any loss of performance.

Politics, economics, ethics

The questions this will raise will be largely political, economic and ethical.

Do your governments actually want you to know that an expensive treatment in a far-off land has a higher success rate than the one that is offered in your area? (As in the case of Ashya King.)

How long do your governments really want you to live? (Improved accuracy of medical care will inevitably lead to increased longevity).

At what point will they actually withhold information from the patients and will they create an algorithm to decide just how effective your treatment is allowed to be?

Will the same computers evaluate your viability as a human and your right to good health? And, if so, what criteria will they use?

We will need answers to these questions – and soon. In the meantime, young medical students should be cautious before imagining that they are on a path to their job for life and bear in mind that, one day, they may need a new career.

Updated: March 2020

What's it all about?

This blog is intended to look at what changes we can anticipate in the near and distant future.

It will largely focus on the impacts of technology, particularly, the Internet of Things, Artificial Intelligence, big data processing, robotics and 3-D printing.

But it may venture into other areas, such as politics and social issues.

Naturally, there will be speculation involved and sometimes it will be wrong.

But, if it were easy, where would the fun be?