
The U.S. Treasury Department has issued a report on the growing risks to banks, other financial organizations, and their business and consumer clients from criminals using artificial intelligence (AI) applications in fraud. The good news within that otherwise dire warning is that increased data sharing could greatly reduce the threat, and may also generate new business opportunities for responsive entrepreneurs.

The Treasury report lays out the multifaceted problem of scammers' increasing use of AI to directly target financial institutions, and through them the companies and individuals they serve. The study stemmed from an October executive order by President Joe Biden calling for effective regulation of AI. The Treasury responded by sharing the lessons of its consultation with "42 financial services sector and technology related companies."

That report, officials say, marks the first step in what must become a longer, wider partnership between government agencies and both big and small banks in the fight against growing fraud using AI.

"Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability," said Under Secretary for Domestic Finance Nellie Liang. "Treasury's AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud."

The report's 10-item list of next steps offers banks and other financial businesses of all sizes reference points for reinforcing advanced defensive AI programs to detect and thwart scammers seeking access to their systems.

Those remedies include creation of "a common AI lexicon," "best practices for data supply chain mapping," and "regulatory coordination." Also necessary, it adds, is improved "explainability of advanced machine learning models, particularly generative AI" to make what are vital but often mystifying technologies accessible to all financial sector actors.

It also calls for broader information-sharing capabilities to close what the report terms "the fraud data divide." That will require the collective pooling of the huge reserves of client, financial, and operational data that big firms possess and use to build fraud-thwarting models more effectively and economically, and that smaller peers would then be able to rely on as well.

But that is the exact point where Narayana Pappu, CEO of Zendata, a San Francisco-based provider of data security and privacy compliance solutions, says new business-creation opportunities arise.

Because that collection of shared bank data will be enormous and diverse, Pappu tells payment tech site PYMNTS, entrepreneurs capable of helping financial companies sift through and identify actionable information to develop defensive AI tools will be in great demand.

"Data standardization and quality assessment would be a ripe opportunity for a startup to offer as a service," Pappu told the site. "Techniques, such as differential privacy, can be used to facilitate information between financial institutions without exposing individual customer data, which might be a concern preventing smaller financial institutions from sharing information with other financial institutions."

It would also permit new businesses to generate income creating AI defenses that allow clients to save billions in collective losses to scammers annually. Last year the FBI's Internet Crime Complaint Center received over 880,000 reports of online fraud representing up to $12.5 billion in losses, a 22 percent increase over 2022. Financial services ranked fourth among targeted sectors, after healthcare, critical industry, and government facilities.

Rise in AI Fraud Spurs Government-Financial Sector Cooperation to Protect Against It
By Bruce Crumley, March 29, 2024


© Inc.com

