Some rural pharmacies looking at AI‐Enabled Telehealth Solutions

Voice of Alexandria reports…

Independent Pharmacy Cooperative (IPC) announces a strategic partnership with Doctronic to help independent pharmacies expand access to AI‐enabled telehealth. Building on IPC’s Digital Health investments, this collaboration supports care models that prioritize convenience, speed, and trusted support close to home, while keeping pharmacies central to the patient relationship.

Through this partnership, IPC and Doctronic will offer member pharmacies a practical on-ramp to digital care. Doctronic’s platform streamlines AI-assisted intake and connects patients to licensed clinicians, helping pharmacies expand access to care without losing the community connections patients rely on. For more, visit: https://www.ipcrx.com/digital-health-for-independent-pharmacies-2.

“AI is everywhere, and it can feel overwhelming,” said Kate Helf, VP of IPC Digital Health. “We see AI-enabled telehealth as a foundational tool we’ll continue to build on, supporting independent pharmacies as they expand access to care while staying central to the patient relationship.”

In many rural and underserved communities, independent pharmacies are often the most accessible healthcare touchpoints. By enabling digitally supported care options through the pharmacy, IPC and Doctronic aim to help close gaps in availability, strengthen continuity of care, and expand the support patients can receive, regardless of geography.

How do job-seeking kids feel about AI? Not great

Axios reports

America, we have a problem: Young adults are scared and unprepared for the AI revolution upending their early career choices and prospects.

  • They tell pollsters they’re frightened, even angry, about AI’s fast arrival. They’re rightly unnerved by a tough job market for college grads. And most aren’t remotely equipped by schools to be AI-savvy.

Why it matters: This is a growing problem for just about everyone — kids, educators, employers and politicians.

  • The youngest, most technologically native age group should be among the biggest cheerleaders and beneficiaries of AI. They aren’t. If anything, their feelings are growing more sour.

By the numbers: Gen Z’s excitement about AI dropped 14 points over the last year to just 22%, according to Gallup polling released last week. Hopefulness about the technology fell nine points to 18%, while anger rose nine points to 31%.

AI’s impact on unemployment rate is real but minimal and mixed

Axios reports

The impact of AI on the job market is starting to show up in the data analyzed by Wall Street firms — so far it’s pretty modest, but certainly real.

What they found: AI has both created and destroyed jobs over the past year.

  • It reduced employment in occupations that are easily substituted by AI, translating to a slight 0.16 percentage point increase in the unemployment rate.
  • At the same time, AI decreased unemployment by 0.06 point in jobs that are “augmented” by AI — roles that rely on things that machines cannot replace, like human judgment, interpersonal interaction and accountability.

Zoom out: Overall, AI raised the unemployment rate by just 0.1 percentage point, they find.
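The two occupation-level effects roughly net out to the headline figure. A quick back-of-the-envelope check, using the numbers from the article (the netting arithmetic here is my own illustration, not the Wall Street firms' methodology):

```python
# Numbers from the article: AI's estimated effects on the unemployment rate,
# in percentage points (pp). Netting them is a simple sum.
substitution_effect = 0.16   # pp increase in AI-substitutable occupations
augmentation_effect = -0.06  # pp decrease in AI-augmented occupations

net_change = substitution_effect + augmentation_effect
print(f"Net change in unemployment rate: {net_change:+.2f} pp")
```

Which lands on the roughly 0.1 percentage point overall increase the analysts report.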

The article goes on to explain that there are two sides to AI. It will help some professions and eliminate others…

The bottom line: AI’s impact on the labor market is small so far, and it’s more complicated than the doomers want you to think.

EVENT Mar 27: AI Literacy Day webinar spotlighting AI & workforce digital skills

From the National AI Literacy Day

Nationwide Day of Action: Professional Development Webinars

➤  10 AM EST: Flight Path 2030: A Principal’s Blueprint for Building AI-Ready High Schools

➤  11:30 AM EST: Student Voices on AI: National AI Literacy Day Town Hall

➤  1 PM EST: Engaging Parents and Families in AI Tools

➤  2 PM EST: AI Literacy in Action: The Path to the AI-Ready Graduate

➤  3 PM EST: SAFE and Connected: Negotiating for an Ethical, Interoperable, and Human-Centric AI Future

➤  4 PM EST: What AI Changes (and What It Doesn’t): A Framework for Educators

Access each webinar registration link here: https://tinyurl.com/NAILDPD2026

New MN Bill introduced: providing requirements for artificial intelligence chatbot technology HF4452

I am going to try to at least track the bills that get introduced that are related to broadband and/or broadband use. I may not follow all of them closely. Click the bill number for more info and updates:

From the MN House:

Finke, Koegel, Sencer-Mura, Curran, Moller, Acomb, Jordan and Youakim introduced:

H. F. 4452, A bill for an act relating to civil law; providing requirements for artificial intelligence chatbot technology; creating a cause of action for harm caused by artificial intelligence chatbot use; proposing coding for new law in Minnesota Statutes, chapter 604.

The bill was read for the first time and referred to the Committee on Commerce Finance and Policy.

MN lawmakers are proposing bills to regulate access to artificial intelligence

Dakota News Network reports

Minnesota state lawmakers are proposing bills to regulate access to artificial intelligence. One proposal is to ban children from using AI chatbots. It would also prohibit health insurers from using AI to determine if a procedure is medically necessary. A separate bill would ban the use of AI algorithms to set different prices for the same goods and services for different consumers. Republicans and Democrats alike at the Minnesota State Capitol believe that, in the absence of federal regulations, states must create their own.

Press Conference: MN Lawmakers Introduce Legislation to Regulate Artificial Intelligence

Earlier today, Senator Erin Maye Quade, DFL-Apple Valley, legislators, and supporters held a press conference to introduce bills intended to regulate artificial intelligence for consumer protection and public safety. It’s a great peek at what might be happening later today and for the rest of the session. Reporters asked questions that many of us might ask – such as what the bills would mean in the real world.

Here are some of the sessions that will be happening today. You can watch in real time or view the archive later:

S.F. 1120 (Maye Quade): Government entities prohibition from requesting or obtaining reverse-location information

scs1120a-1.pdf

ACLU-MN-Letter-of-Support-Reverse-Warrants-SF-1120.pdf

BCA-Opposition-to-SF1120-3-5-26-Signed-3-5-26.pdf

ILCM-SF1120-Pro.pdf

MCPA-_SF1120_Letter-of-Opposition.pdf

Reverse-Warrant-Flyer-SF-1120.pdf

20260302134430681_25-112-Google-Chatrie-Amicusfinal.pdf

S.F. 1856 (Maye Quade): Usage of artificial intelligence in the utilization review process prohibition provision

scs1856a-1.pdf

S.F. 1857 (Maye Quade): Minor access to chatbots for recreational purposes by persons prohibition provision

scs1857a-2.pdf

MFC-SF1857-Pro-Senate-Judiciary-and-Public-Safety-Committee-03092026.pdf

S.F. 1886 (Maye Quade): Individual communication with artificial intelligence disclosure requirement provision

RMAI-Memorandum-in-Opposition-to-SF1886-03-09-2026.pdf

S.F. 3098 (Maye Quade): Prohibition from using artificial intelligence to dynamically set product prices

scs3098a-1.pdf

MN for Open Government AI Regulation Presentation

MN_for_Open_Government_AI_regulation_presentation.pdf

Midwest FiberPath to build multi-conduit long-haul backbone to support AI

Telecompetitor reports

Midwest FiberPath says it will build a 1,200-mile multi-conduit long-haul backbone intended to support the increased traffic created by artificial intelligence (AI) in east-west and north-south directions in the Midwest. It will provide what it describes as next-generation carrier mesh diversity.

More details…

The long-haul topology will have three primary corridors:

  • Joliet, Illinois to Des Moines, Iowa to Council Bluffs, Iowa: This multi-conduit route will support hyperscale east-west traffic fabrics between Chicago interconnection ecosystems and numerous Iowa compute campuses.

  • Minneapolis, Minnesota to Des Moines, Iowa to Kansas City, Missouri: This corridor, also multi-conduit, will run north-south and enable regional mesh diversity and alternative long-haul routing across the central U.S.

  • Minneapolis, Minnesota to Cedar Rapids, Iowa to Joliet, Illinois: This will be a diagonal extension reinforcing Iowa as a center-node aggregation point for multi-directional traffic exchange.

New: MN Report of the Technology Advisory Council: cybersecurity, AI, data sharing and production management

Minnesota has a Technology Advisory Council (TAC). They release an annual report. For someone (like me) who attends all of the MN Broadband Task Force meetings, it’s a next step of sorts, looking at what’s coming toward us and how the state can maximize benefits and minimize risk. Also, for someone who attends the broadband meetings, the discussions happening at the TAC shine a light on the need for ubiquitous broadband. Here’s the executive summary…

Technology shapes how Minnesotans access essential government services — from childcare and healthcare to public safety, licensing, and regulatory oversight. As expectations for speed, security, and transparency rise — and as cyber threats, artificial intelligence, and federal funding uncertainty intensify — Minnesota must modernize in ways that deliver clear public value while protecting privacy, security, and public trust.

The legislature established the TAC in 2021 to provide strategic guidance to MNIT and executive branch agencies on enterprise technology priorities. Drawing on expertise from across the public and private sectors, the TAC helps the state reduce systemic risk, modernize responsibly, and align technology investments with legislative intent and statewide goals. In 2025, the TAC focused on strengthening the enterprise foundations required for effective, accountable government. Building on prior recommendations, the TAC emphasized governance-driven approaches that move Minnesota beyond isolated projects toward durable, scalable capabilities. Across all focus areas, a consistent theme emerged: Lasting public value depends on shared standards, coordinated execution, and sustained investment in people, data, and security. The TAC’s work in 2025 centered on four priority areas:

Advancing responsible artificial intelligence

Minnesota continued to lead in responsible AI adoption by strengthening enterprise governance, shared standards, and workforce readiness. Rather than pursuing AI for its own sake, agencies applied AI to clearly defined use cases that improve efficiency and decision-making while maintaining transparency, auditability, and alignment with Minnesota values.

Reinforcing cybersecurity and operational resilience

In response to an evolving threat landscape — including emerging risks such as quantum computing — and shifting federal support, the TAC prioritized a whole-of-state cybersecurity model. This approach emphasizes shared intelligence, coordinated response, and workforce development to reduce risk and protect critical services across state, local, Tribal Nations, and critical infrastructure partners.

Strengthening data sharing and evidence-based decision-making

The TAC emphasized the need for a coordinated, enterprise approach to data stewardship and sharing. Stronger leadership, clearer legal frameworks, and improved data quality enable agencies to collaborate more effectively, reduce duplication, and deliver faster, more seamless services — while protecting privacy and security.

Modernizing service delivery through product and experience

Recognizing that human-centered services depend on strong product and agile practices, the TAC advanced recommendations to modernize procurement, funding models, leadership engagement, and workforce capacity — shifting government from project completion to sustained value delivery.

The report goes on to provide recommendations for each area.

Why Microsoft’s “Community-First” AI Data Center Promise Isn’t the Full Story

AI data centers have been a big topic for many rural communities in Minnesota, such as Farmington, Hermantown, North Mankato and more. I was interested when I came across a podcast on AI data centers in Black neighborhoods from The Miseducation of Technology. The issues sound similar to those I’ve heard in rural Minnesota. The recommendations are also similar. But sometimes it’s easier to see the issues and recommendations more clearly when we’re not talking about our park or our water bills.

Here’s a description of, and link to, the podcast itself…

In this episode of The Miseducation of Technology, Attorney Danielle A. Davis breaks down what’s really behind Microsoft’s new “community-first” promise on AI data centers—and why that announcement didn’t come out of nowhere.

The conversation starts where most tech policy discussions don’t: with culture.

In 2025, R&B singer SZA publicly questioned the environmental cost of AI—calling out energy use, pollution, and why Black cities like Memphis keep ending up on the receiving end. What sounded like a celebrity tweet was actually a warning rooted in lived experience.

Because while AI is often sold as “cloud-based” and abstract, for many Black communities it is physical, loud, and permanent—arriving in the form of massive data centers that consume enormous amounts of power and water, strain local grids, and reshape land use with little community input.

So why did Microsoft suddenly promise to:

• Cover electricity costs
• Reduce and replenish water use
• Stop asking for tax breaks
• Invest in local training and education

And more importantly—what does that actually solve… and what does it leave untouched?

Which jobs and workers are most and least able to thrive with AI?

The Brookings Institution discusses research that looks at workers’ ability to adapt if job loss does occur…

In short, the new analysis asks: If AI does cause job displacement, who is best positioned to adapt, and who will struggle most? In asking those questions, this analysis intends to help policymakers focused on AI’s labor market impacts better target their attention and resources.

I thought this might be of interest to policymakers, anyone in workforce development, and anyone with a job…

Overall, this analysis offers a more nuanced picture of AI’s possible impacts on workers than AI exposure measures can on their own.

Specifically, the analysis focuses on understanding the degree to which workers in different highly exposed occupations could manage a job transition after involuntary displacement. In doing so, it makes clear the existence of both large zones of strong resilience to job loss across the workforce as well as concentrated pockets of heightened vulnerability if displacement were to occur.

Given this, the report likely has practical use for workforce and employment development practitioners because understanding where workers are most and least resilient to AI-driven labor market change may help inform the optimal use of public funding for workforce adjustment programs.

The research is interesting and is highlighted in the article’s graphics…

Visit the article for access to the interactive maps of communities with the largest share of jobs in high vulnerability occupations.

The Benton Institute looks at model legislation for a People-First Model Chatbot Bill

The Benton Institute for Broadband & Society reports

This week, the Consumer Federation of America (CFA), the Electronic Privacy Information Center (EPIC), and the nonprofit Fairplay released model legislation for a People-First Model Chatbot Bill. The People-First Chatbot Bill intends to give lawmakers a straightforward approach to address the harms caused by artificial intelligence (AI) chatbot products developed and deployed by tech companies with little oversight or transparency. Rather than outlawing chatbots, the model bill provides a workable, clear framework to encourage the development of safer technology.

Why are they doing this?

Recent lawsuits show that chatbots can cause devastating harm to people of all ages, including both children and adults. This model bill endeavors to make them safer for everyone.

The bill looks at several aspects of chatbot use…

The People-First Chatbot Bill is organized into a number of sections, each tackling a different facet of chatbot use, privacy protections, transparency requirements, and bill implementation:

  • Data Privacy and Security

  • Transparency for Users

  • Safety by Design: Assessments and Transparency Requirements

Fiber supply threatens US broadband targets

Light Reading reports

Warnings about a US fiber crunch that could slow down broadband deployment have intensified since the summer. In August, Incab America, a Texan maker of fiber-optic cable, notified customers that “a significant fiber shortage is emerging” in a statement signed by Mike Riddle, its president, who blamed data centers for “sucking up all the fiber production capacity.” The situation reminded him of 2000, when lead times lengthened to a year. They have now risen to the same level, said a separate industry source who requested anonymity.

That compares with normal lead times of between eight and 12 weeks, according to the same source. Even when there is some tightness in the supply chain, they never usually exceed 15 to 20 weeks, he said. But a wave of investment in data centers, built to train AI’s large language models (LLMs), has quickly gobbled supplies of glass and other materials used in fiber-optic cables. “The three leading glass manufacturers in the United States are experiencing challenges in meeting this heightened demand,” observed Riddle in August. “Notably, one manufacturer has already sold all of its fiber inventory through the year 2026.”

Policies may also have an impact…

Yet surging demand from AI data centers is not the only problem. Sourcing components from overseas has also become harder because of the tariff restrictions Trump has slapped on imports of foreign goods. There is some industry frustration, too, about the need to comply with the rules of the Build America, Buy America (BABA) Act signed into law by Joe Biden, Trump’s White House predecessor, in November 2021.

Under BABA’s provisions, initiatives are ineligible for government financial aid “unless all of the iron, steel, manufactured products, and construction materials used in the project are produced in the United States.” That has ramifications for companies participating in the Broadband Equity Access and Deployment (BEAD) program, which draws on government funds to extend network coverage into hard-to-reach and underserved communities.

How Trump Executive Orders shape Federal AI regulation and override State actions

The Benton Institute for Broadband & Society outlines how Trump Executive Orders shape federal AI regulation and override state actions. They outline actions, plans, and deadlines, as well as a very quick summary…

President Trump’s AI policy represents a distinctive approach: the U.S. government will be an active participant in advancing AI technology while adopting light federal regulation focused on content standards for government-purchased AI, combined with aggressive federal preemption of state regulation. Rather than creating extensive federal rules for private AI companies, the Administration is working to prevent states from creating such rules while investing heavily in federal AI development through initiatives like the Genesis Mission.

This creates a framework in which AI companies face minimal regulatory requirements from any level of government, with the primary federal interventions being procurement standards for AI systems used by federal agencies and efforts to establish a unified national framework that supersedes state authority.

AI Governance Checklist for Elected Officials from The Center for Democracy and Technology

I love a good checklist. Even if you may never need the checklist, I think looking it over gives you a good idea of how something works and what’s involved. The Center for Democracy and Technology has created a checklist for AI use in government…

This brief provides elected officials and senior leaders working in state and local government with a checklist of core recommendations specifically aimed at building government-wide structures, strategies, and processes to advance trustworthy and responsible use of AI in public benefits and services across five core areas:

  • Public Transparency and Stakeholder Engagement: Improve public awareness and understanding of AI by establishing public AI inventories, prioritizing public education about government use of AI, creating advisory councils with members of the public to inform agency AI decision-making, implementing mechanisms for meaningful feedback from the public, and instituting plain-language notices and explanations for affected individuals.
  • Accuracy and Reliability: Ensure that AI projects advance agency goals and combat AI-driven challenges by adopting acceptable AI use policies or guidelines, grounding the acquisition and use of AI tools in evidence-based decision-making, establishing minimum government-wide AI performance and testing standards and procurement criteria, implementing regular independent audits of AI tools (including post-deployment), building in requirements for human oversight and training, and prioritizing investment in AI talent.
  • Governance and Coordination: Promote cross-agency governance practices by adopting a government-wide AI plan and governance strategy, appointing a chief AI officer or equivalent senior leader, creating AI governance boards, establishing centralized emergency response protocols and AI incident reporting, engaging cross-functional staff in AI decision-making, establishing forums for government employees to provide input on AI projects, and incorporating responsible AI guidance into existing employee training and onboarding materials.
  • Privacy and Security: Identify and mitigate AI-related privacy and security harms by updating cybersecurity and data policies; establishing privacy and security protections in AI procurement; integrating chief privacy, information security, and data officers throughout AI decision-making; and prioritizing privacy and cybersecurity in employee AI training.
  • Safety, Rights, and Legal Compliance: Address the risks that AI systems may pose to the public’s safety and rights by integrating civil rights, risk, and legal officers throughout AI decision-making; establishing heightened risk management requirements for high-impact uses; and prioritizing legal compliance and identification and mitigation of AI harms in employee AI training.