Banks increasingly implementing AI-driven monitoring systems

April 22, 2021, by Horst Buchwald

New York, 4/22/2021

Several U.S. banks are implementing AI-driven surveillance systems to monitor customer and employee behavior and other activity on their premises. City National Bank of Florida, JPMorgan Chase, and Wells Fargo have so far trialed or deployed computer vision systems.
City National plans to deploy facial recognition at ATMs and in other areas of 31 branches, replacing existing authentication methods; the trials are set to begin in 2022.
JPMorgan is conducting "a small test of video analytics technology with a handful of Ohio branches." The test began in 2019 and uses archived CCTV footage rather than a live data stream. Among the bank's goals is identifying customer preferences, such as whether women avoid ATMs in cramped spaces or how many customers leave the branch when there is a queue. Wells Fargo plans to use the technology to prevent fraud. One unnamed bank wants to analyze potential security issues, such as homeless people pitching tents near ATMs, people loitering near the premises, or vault and office doors left open.
Critics have raised concerns about possible income-based or racial discrimination in account monitoring and about inaccurate facial recognition, both of which they argue violate basic privacy rights. Several bank officials said they will be mindful of these potential social problems as the software is rolled out.
For similar reasons, some cities, companies, and regulators have already banned or restricted this controversial technology. Examples:
In January, Portland, Ore., banned both government and private use of facial recognition software, covering police, airports, businesses, and other entities.
Boston, San Francisco, and Oakland, Calif., have all banned government use of facial recognition in recent years.
Rite Aid, a nationwide drugstore chain, stopped using its facial recognition software in 2020 after it was found to have been installed predominantly in low-income, non-white neighborhoods.
The EU wants to introduce rules that would ban the use of AI for social scoring, for exploiting people's habits and vulnerabilities, and for grouping or discriminating against people.