" />




The Harlow Report

The Harlow Report-GIS

2025 Edition

ISSN 0742-468X
Since 1978
On-line Since 2000


GIS News Snippets

For the week of
December 15, 2025


 Remember When?
A “Harlow Report” From December 16, 2024 —

Geocoding: The First Step Towards Unlocking Location Intelligence

by  Ryan Peterson

Microsoft Azure Maps is a collection of mapping and location APIs that enable enterprises to add location intelligence into their solutions.

The Geocoding Service is one of the most prominently used of these APIs. It converts textual addresses into geographic coordinates and, in reverse, coordinates into addresses. For example, a coffee shop address such as 1124 Pike St, Seattle can be converted into 47.61403,-122.32820 (latitude and longitude coordinates) so that it can be placed and visualized on a map or used to calculate distance metrics.

Geocoding is one of the most crucial steps in location intelligence, as it sets the foundation for all advanced location analytics that can be done once addresses have been accurately plotted on a map. For example, after identifying the coffee shop's location, you can find the nearest grocery stores or other outlets of interest.
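
As a rough sketch of that first step, the snippet below geocodes the article's example address against the Azure Maps Search Address endpoint (the v1.0 REST API; the subscription key is a placeholder, and the response parsing assumes the documented results/position shape):

    import requests

    SUBSCRIPTION_KEY = "YOUR_AZURE_MAPS_KEY"  # placeholder key

    def geocode(address: str) -> tuple[float, float]:
        """Convert a textual address to (latitude, longitude) via Azure Maps."""
        resp = requests.get(
            "https://atlas.microsoft.com/search/address/json",
            params={
                "api-version": "1.0",
                "subscription-key": SUBSCRIPTION_KEY,
                "query": address,
            },
            timeout=10,
        )
        resp.raise_for_status()
        best = resp.json()["results"][0]  # results are sorted by match score
        return best["position"]["lat"], best["position"]["lon"]

    print(geocode("1124 Pike St, Seattle"))  # -> roughly (47.61403, -122.3282)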

 Read full story at Microsoft Blog


An Introduction to Metadata for Geographic Information

by  Henri J.G.L. Aalders

The International Organization for Standardization (ISO) 19115 Metadata standard defines and standardizes a comprehensive set of metadata elements.

Summary

The ISO 19115 Metadata standard establishes a comprehensive framework for documenting geographic data. Designed to serve a “global community” in a multi-lingual environment, it ensures compatibility with wider IT standards while supporting specific geographic information needs.

This standard applies to all levels of geographic data — from dataset series down to individual features — and defines both a mandatory minimum set and optional elements for more extensive descriptions. Crucially, ISO 19115 allows metadata definitions to be extended without sacrificing interoperability: extended metadata remains comprehensible to other users while still meeting browsing requirements for specific geographic data types. By standardizing these elements, ISO 19115 facilitates the efficient discovery, understanding, and exploitation of geographic information across diverse systems and international borders.
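
To make the idea concrete, here is a minimal, illustrative subset of ISO 19115's core metadata elements expressed as a Python dict (element names are paraphrased from the standard's core set, the dataset itself is hypothetical, and a production record would normally be serialized as ISO 19139 XML):

    # Illustrative subset of ISO 19115 core metadata (hypothetical dataset).
    dataset_metadata = {
        # Mandatory core elements
        "title": "National Coastal Orthoimagery 2024",
        "reference_date": "2024-06-30",
        "language": "en",
        "topic_category": "imageryBaseMapsEarthCover",  # an MD_TopicCategoryCode value
        "abstract": "Orthorectified 25 cm aerial imagery of the national coastline.",
        "metadata_contact": "gis@example.org",          # placeholder contact
        "metadata_date_stamp": "2025-01-15",
        # Optional core elements
        "geographic_extent": {"west": -4.85, "east": -3.15,
                              "south": 43.10, "north": 43.55},
        "spatial_resolution_m": 0.25,
        # A community extension; still readable by generic metadata browsers
        "extension:sensor_platform": "fixed-wing aerial survey",
    }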

 Read full story at ScienceDirect


Gemini Integrates Google Maps for Rich Local AI Results with Photos and Ratings

by  Staff

According to @GeminiApp, Gemini now delivers local search results in a visually rich format, incorporating photos, ratings, and real-world information from Google Maps directly into its AI platform.

Summary

Google’s Gemini AI introduced a major upgrade on December 11, 2025, integrating directly with Google Maps to deliver rich, visual local search results complete with photos, ratings, reviews, and real-time information. Announced via the official Gemini account, this feature transforms traditional text-based responses into immersive, context-aware answers for queries about restaurants, landmarks, and other places.

The update leverages Google Maps’ database of over 250 million locations and aligns with industry trends toward multimodal AI. Analysts predict that by 2026 more than 40 percent of AI assistants will include visual elements. For businesses, the integration boosts visibility in local search—potentially driving foot traffic and inquiries—while opening new advertising opportunities within AI-generated suggestions.

Competing against ChatGPT, Apple Maps, and Bing AI, Gemini gains a clear advantage through Google’s seamless access to real-time mapping data. This advancement not only enhances user experience but also accelerates growth in the $150 billion local search market, setting a new standard for hyper-local AI in 2025 and beyond.

 Read full story at Blockchain News


Google Maps Quietly Added This Long-Overdue Feature for Drivers

by  Tim Hardwick

Google Maps on iOS quietly gained a new feature recently that automatically recognizes where you've parked your vehicle and saves the location for you.

Summary

Google Maps introduced a convenient new feature for iOS users, announced on LinkedIn by senior product manager Rio Akasaka. The app now automatically detects and saves your parked car's location when connected to your vehicle via USB, Bluetooth, or CarPlay, without the need to drop a manual parking pin.

The saved spot appears as a pin upon opening Maps after parking and persists for up to 48 hours or until you drive again, at which point it is removed automatically. Additionally, it uses any custom car icon you’ve selected — a customization option introduced in 2020 and recently expanded — instead of the default “P” marker.

While manual parking saving has long been available, this automation enhances usability. The feature rolled out on iPhone about a month ago and remains iOS-exclusive; Android offers a similar reminder but requires manual removal.

 Read full story at MacRumors


How One Spanish Region Revolutionized Mapping with AI

by  Mark Cygan

The Government of Cantabria's Cartography and Geographic Information System Service uses AI-powered mapping to guide millions of visitors toward sustainable choices while protecting the region's beaches, forests, and natural heritage from overuse.

Summary

In Spain’s Cantabria region, a small team of six cartographers led by Gabriel Ortiz uses computer vision and deep learning to protect the coastline that attracts millions of visitors. By training AI on decades of aerial imagery, they track beach overcrowding, illegal parking in sensitive areas, the surprising doubling of forest cover since the 1950s, and the spread of invasive pampas grass.

Their “summer beach algorithm” counts visitors and calculates environmental pressure per square meter, guiding tourists toward less-crowded shores while helping officials plan sustainable transit. Open-data maps and mobile apps put this analysis directly into citizens’ hands, replacing costly manual surveys with automated, region-wide monitoring.

Ortiz sees computer vision as a “force multiplier” and digital twins as cartography’s future—lifelike 3D replicas that make complex landscapes intuitively understandable and foster greater public engagement in preservation.
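
The article does not publish the algorithm itself, but the "pressure per square meter" idea reduces to a simple ratio once computer vision has produced a visitor count; a minimal sketch with invented numbers:

    # Beach pressure as visitors per square meter (illustrative only; the
    # real system derives visitor counts from AI analysis of aerial imagery).
    def beach_pressure(visitor_count: int, beach_area_m2: float) -> float:
        return visitor_count / beach_area_m2

    # A 1.2-hectare beach with 900 detected visitors:
    p = beach_pressure(900, 12_000)
    print(f"{p:.3f} visitors/m^2")  # 0.075 -> could trigger a "seek quieter shore" hint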

 Read full story at Esri Blog


How to Improve Lidar Accuracy and Quality

by  Emily Hunt & David McKittrick

Efficiently enhance lidar accuracy and quality through filtering, QC, and classification tools in Global Mapper Pro.

Summary

High-quality point cloud data is essential for accurate derivative products like DTMs and vector features. Poor positional accuracy, misclassified points, or noise can corrupt downstream analysis. Global Mapper Pro® offers a comprehensive suite of tools to edit, filter, and enhance lidar and photogrammetric point clouds before analysis or export.

Key workflows include reviewing metadata and 3D visuals for initial QC; manual, automatic, and custom classification of ground, buildings, vegetation, and utility lines; positional correction via offset, QC comparison, Fit Lidar, and image rectification tools; and flexible filtering by extent, density, noise thresholds, or classification.

Revamped display filters and thinning options help manage dense datasets, while noise isolation and class-based filtering ensure clean bare-earth models. All changes can be saved permanently by exporting the improved point cloud. Ultimately, Global Mapper Pro's robust editing and filtering capabilities ensure that high-quality input carries through to reliable geospatial outputs.
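
Global Mapper Pro's tools are GUI-driven, but the noise-threshold filtering described above is conceptually similar to classic statistical outlier removal: flag points whose mean distance to their k nearest neighbors is far above the dataset average. A generic sketch with numpy and scipy (a stand-in illustration, not Global Mapper's implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_statistical_outliers(points, k=8, std_ratio=2.0):
        """Drop points whose mean k-nearest-neighbor distance exceeds the
        dataset mean by more than std_ratio standard deviations."""
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k + 1)   # first neighbor is the point itself
        mean_dists = dists[:, 1:].mean(axis=1)
        threshold = mean_dists.mean() + std_ratio * mean_dists.std()
        return points[mean_dists < threshold]

    # Synthetic test: a dense cloud plus a few isolated high-altitude returns
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0, 100, size=(10_000, 3))
    noise = rng.uniform(500, 600, size=(20, 3))
    cleaned = remove_statistical_outliers(np.vstack([cloud, noise]))
    print(len(cleaned))  # most of the 20 noise points are removed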

 Read full story at Blue Marble Geographics Blog


Industry News


In Government

Critical Infrastructure Security: What State and Local Leaders Need to Know

by  Mickey McCarter

Here's how utilities and their public sector partners can strengthen electric grid security.

Summary

Critical infrastructure security, particularly for the electric grid, is a growing concern, as retrofitting aging systems with networked computing power expands the attack surface.

This creates security gaps, exacerbated by inconsistent governance and a lack of resources for robust cybersecurity programs. Best practices for securing critical infrastructure include designing for security, implementing secure-by-design principles in the supply chain, segmenting networks, hardening IT defenses, and continuously assessing and monitoring for anomalies.

 Read full story at StateTech


Defense Authorization Bill Includes Billions for Cyber, Intelligence Matters

by  David Dimolfetta

The NDAA notably deviates partly from President Donald Trump's national security strategy, which seeks some distance between the U.S. and Europe. It also makes a sweeping regulatory harmonization demand.

Summary

The National Defense Authorization Act for FY26 allocates billions to cybersecurity, emphasizing threats from foreign adversaries. It directs the Department of Defense to harmonize cybersecurity regulations by June and includes measures to strengthen cybersecurity protections for senior officials' mobile phones.

The bill also addresses AI and cybersecurity adoption, requiring the development of a framework to mitigate risks associated with these technologies.

 Read full story at NextGov


What Is Generative AI? Most of the Public Sector Workforce Doesn't Know

by  Chris Teale

A recent survey found that only about a third understands the technology, and that even fewer use it daily. But a few basic approaches could change that, experts say.

Summary

Governments are increasingly adopting generative artificial intelligence to streamline procurement, compare city data, draft communications, and summarize meetings. Yet a recent SAS survey reveals a significant knowledge gap: only 37% of public sector employees understand generative AI well, and just 13% of organizations use it daily. Fourteen percent report no usage at all—the highest non-adoption rate across sectors.

Only 52% of governmental bodies have established generative AI policies, leaving many employees hesitant to experiment. Experts note that caution is typical; local governments prefer to be “first to be second,” waiting for others to navigate risks first. Without hands-on experience, workers see every use as risky.

Data privacy and security remain top concerns, while bias worries rank lower—possibly because many still underestimate how biased training data can perpetuate discrimination. Leaders urge treating generative AI like a “junior analyst”: review its output carefully, build trust gradually, and use early experimentation to inform stronger policies.

 Read full story at Route Fifty





In Technology

Is That an AI Image? 6 Telltale Signs It's a Fake

by  Elyse Betters Picaro

AI slop is everywhere, and it's getting harder to tell what's real. But these tips and tools can help.

Summary

Generative AI models are flooding the web with increasingly realistic images, the so-called “AI slop,” making it difficult to discern what is real. However, subtle flaws remain. To detect fakes, inspect images for garbled text, anatomical errors like extra fingers, or an “uncanny” plastic-like smoothness. Other indicators include chaotic backgrounds, impossible lighting, or a distinct lack of texture in “restored” photos. Even small businesses inadvertently betray themselves with overly perfect, illustrated-looking product shots.

Beyond visual inspection, technology offers solutions. Google’s Circle to Search and Gemini app can identify AI content by detecting metadata like SynthID watermarks. While not foolproof, combining these free tools with a keen eye for physical inconsistencies is your best defense against digital deception.
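
For readers who want to look at an image's metadata directly, the small Pillow sketch below dumps the EXIF and text-chunk fields that sometimes name a generator. Note that invisible watermarks such as SynthID are not stored as readable metadata and require Google's own detection tools; the file path here is hypothetical:

    from PIL import Image, ExifTags

    def metadata_hints(path: str) -> dict:
        """Collect metadata fields that sometimes reveal an AI generator."""
        img = Image.open(path)
        hints = {}
        for tag_id, value in img.getexif().items():
            name = ExifTags.TAGS.get(tag_id, str(tag_id))
            if name in ("Software", "Artist", "ImageDescription"):
                hints[name] = value
        # Some generators write PNG text chunks (e.g. a "parameters" field),
        # which Pillow surfaces via img.info.
        for key in ("parameters", "prompt", "Comment"):
            if key in img.info:
                hints[key] = img.info[key]
        return hints

    print(metadata_hints("suspect_image.png"))  # hypothetical file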

 Read full story at ZDNET


JavaScript Turns 30: From 10-Day Prototype to the Backbone of the Modern Web

by  Alfonso Maruccia

From browser toy to global standard: JavaScript survived chaos, outlived Java applets, and became unstoppable

Summary

Why it matters: JavaScript, unveiled in December 1995 by Netscape and Sun Microsystems, now powers 98.9% of all websites and countless server-side and desktop projects, making it a cornerstone of the modern web despite lingering trademark disputes.

Created in a legendary 10-day sprint by Brendan Eich, JavaScript — originally called Mocha, then briefly LiveScript — was designed to add approachable interactivity to web pages. Inspired by Scheme, Self, and other languages, it was marketed alongside Sun’s Java, creating lasting confusion even though the two share almost nothing beyond the “Java” trademark.

Thirty years later, Java applets have vanished, Java thrives in enterprise backends, and JavaScript dominates client-side, server-side (Node.js), and cloud development. Standardized as ECMAScript, the language flourishes everywhere except at Oracle, which inherited the JavaScript trademark from Sun, contributes nothing to the language, and continues to block the community from freely using the name it popularized.

 Read full story at TECHSPOT


Use Google Play? You Might Get a Cash Payout From This $700 Million Settlement Soon

by  Artie Beaty

If you made a Google Play purchase between 2016 and 2023, and your account was located in the US, you're a part of the settlement. Here's what to expect.

Summary

Google settled a $700 million lawsuit regarding the Play Store, and users who made purchases between 2016 and 2023 may receive automatic payments.

The settlement also requires Google to make it easier to access and pay for apps, including allowing alternative billing systems and third-party app installations.

 Read full story at ZDNET





In Utilities

Energy Department Launches Breakthrough AI-Driven Biotechnology Platform at PNNL

by  U.S. Department of Energy

U.S. Secretary of Energy Chris Wright yesterday launched a new chapter in securing American leadership in autonomous biological discovery, alongside scientists and private partners at Pacific Northwest National Laboratory.

Summary

U.S. Secretary of Energy Chris Wright commissioned the Anaerobic Microbial Phenotyping Platform (AMP2) at Pacific Northwest National Laboratory (PNNL), heralding a major advance in America’s pursuit of leadership in autonomous biological discovery.

Developed in partnership with Ginkgo Bioworks, AMP2 is described as the world’s largest autonomous-capable system for anaerobic microbial experimentation. By integrating artificial intelligence and robotic automation, the platform will dramatically accelerate the identification, cultivation, and optimization of microbes—shrinking timelines from years to days or weeks.

“By launching AI-enabled, autonomous platforms like AMP2, our DOE National Laboratories are driving scientific breakthroughs faster than ever before,” Secretary Wright declared.

 Read full story at Energy.gov


Google, NextEra Energy to Develop Data Centers With Power Plants on Site

by  Andy Peters

Tech companies are increasingly on the lookout for power supplies for data centers

Summary

Google and NextEra Energy, America’s largest renewable utility, announced Monday a landmark partnership to co-develop gigawatt-scale data center campuses with dedicated on-site power plants, tackling the industry’s acute electricity shortage.

The agreement targets multiple new facilities where generation capacity will be built alongside the data centers themselves, bypassing strained public grids. Neither company disclosed locations or financial details.

The move addresses surging power demand driven by e-commerce, streaming, and especially artificial intelligence, which requires far more electricity than traditional computing. Industry experts warn that insufficient grid capacity, financing, and land availability threaten data-center expansion nationwide.

The pact builds on prior Google–NextEra collaborations, including efforts to restart an Iowa nuclear plant and expand existing data centers.

 Read full story at Costar


Suspicious Minds — Separating Hype from Reality on Data Center Power Demand

by  Lisa Shidler

All the speculation you hear about the future of data centers comes with the promise of massive amounts of electricity usage down the line. But which facilities are using the most grid power right now?


Summary

The largest U.S. data centers, many built over a decade ago, consume significant electricity, with the top 11 sites using an estimated 3,000 MW. Google's Council Bluffs, IA, complex leads at an estimated 500-600 MW, followed by Microsoft's Quincy, WA, site at a similar level.

While data on exact power usage is scarce, these facilities are major grid consumers, collectively exceeding the power needs of millions of homes.
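
For a sense of scale, here is a quick back-of-the-envelope conversion, assuming an average U.S. household uses roughly 10,700 kWh per year (about 1.2 kW of continuous draw); the MW figures are the article's estimates:

    # Convert data-center load (MW) into equivalent average U.S. homes.
    AVG_HOME_KW = 10_700 / (365 * 24)     # ~1.22 kW continuous per household
    for site_mw in (3_000, 600):          # top-11 total; Council Bluffs high end
        homes = site_mw * 1_000 / AVG_HOME_KW
        print(f"{site_mw:,} MW ~ {homes / 1e6:.1f} million homes")
    # 3,000 MW ~ 2.5 million homes; 600 MW ~ 0.5 million homes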

 Read full story at RBN Energy




