
Pediatric Warts: Doctor’s Advice for Parents

by Dr. Jennifer Chen

by Carolena Steinberg and Diana H. Lee

Credit: Pixabay/CC0 Public Domain

Warts are small, firm bumps on the skin caused by viruses from the human papillomavirus (HPV) family. They are common among school-aged children but can affect people of any age.

The good news is that many kinds of warts go away on their own without treatment. But they can become painful if they are bumped, and some children are embarrassed by them. A pediatrician can give advice on treatment, such as applying an over-the-counter medicine containing salicylic acid to the warts.

Children can pick up several types of warts, and each spreads in a different way. Human papillomaviruses are spread by close contact, either through direct touch or by sharing objects. The virus often gets into the body through breaks in the skin.

Common skin warts are bumps with a rough surface and a yellow, tan, black, brown, or gray color. They can appear anywhere on the body, though they are most often found on the hands (including near or under the fingernails), the toes, the face, and around the knees.

When warts are on the bottom of the feet, doctors call them plantar warts. Plantar warts are often flat and painful; the child may say it feels like walking on a pebble. The warts may have tiny red or black dots on them, which are actually tiny, swollen or dead blood vessels. Warts on the bottom of the feet often come from walking barefoot in locker rooms or around pools.

Warts on the genitals, also called condyloma, are usually spread sexually during genital, oral, and anal sex with a partner who is infected. However, skin warts can also be spread




The Future of AI Regulation in the US (2026)

AI Regulation in the US: A 2026 Update

As of January 19, 2026, the United States operates under a complex and evolving landscape of AI regulation, characterized by a mix of federal initiatives, state-level laws, and ongoing judicial review. This report details the key developments and current status of AI oversight across the country.

Federal Regulation: The AI Accountability Act of 2025

The primary federal legislation governing AI is the AI Accountability Act of 2025, signed into law on August 12, 2025. This act establishes a risk-based framework for AI systems, focusing on those deemed “high-risk” due to their potential impact on civil rights, safety, and economic opportunity.

The Act created the AI Safety Board (AISB), an independent agency responsible for developing technical standards, conducting audits, and enforcing compliance. The AISB’s authority extends to AI systems used in critical infrastructure, healthcare, financial services, and law enforcement. Systems classified as “low-risk” are subject to lighter oversight, primarily focused on transparency and data privacy.

Example: The AISB released its first set of technical standards for facial recognition technology on December 15, 2025, requiring developers to demonstrate accuracy across diverse demographic groups and implement safeguards against bias. AISB Facial Recognition Standards (December 15, 2025)

State-Level Regulations: California, New York, and Texas

Several states have enacted their own AI regulations, often going beyond the federal framework. California, New York, and Texas have emerged as leaders in this area.

  • California: The California Consumer Privacy Rights Act (CCPRA) was amended in 2026 to specifically address AI-driven profiling and automated decision-making. Consumers now have the right to access details about the logic behind AI-powered decisions that affect them. CCPRA Amendment – AI Profiling (January 1, 2026)
  • New York: New York City passed the Algorithmic Accountability Law in July 2025, requiring companies to conduct bias audits of AI systems used in hiring and promotion decisions. NYC Algorithmic Accountability Law.
  • Texas: Texas focused on AI in education, passing the Student Data Privacy and AI Transparency Act in November 2025. This law requires schools to disclose the use of AI-powered tools in classrooms and obtain parental consent for data collection. Texas Student Data Privacy Act (November 15, 2025)

Judicial Review and Legal Challenges

The AI Accountability Act of 2025 and state-level regulations have faced numerous legal challenges, primarily from industry groups arguing that the laws are overly broad and stifle innovation.

The Supreme Court heard arguments in Data Innovations v. AISB on November 8, 2025, concerning the AISB’s authority to compel audits of proprietary AI algorithms. Supreme Court Docket – Data Innovations v. AISB. The Court’s decision, expected in March 2026, will substantially shape the scope of federal AI regulation. A key point of contention is whether the AISB’s audit requirements constitute an unreasonable search under the Fourth Amendment.

Example: The Ninth Circuit Court of Appeals ruled in favor of the state of California in TechForward v. California on January 5, 2026, upholding the state’s right to regulate AI-driven profiling. Ninth Circuit Ruling – TechForward v. California (January 5, 2026). The court found that the state’s interest in protecting consumer privacy outweighed the industry’s claims of economic harm.

The Role of NIST and AI Standards Development

The National Institute of Standards and Technology (NIST) continues to play a crucial role in developing voluntary AI standards and best practices.

NIST’s AI Risk Management Framework (AI RMF) 1.0, released in February 2023, provides a structured approach for organizations to identify, assess, and mitigate AI-related risks. NIST AI Risk Management Framework 1.0. While not legally binding, the AI RMF is widely adopted by companies seeking to demonstrate responsible AI development and deployment. NIST is currently working on AI RMF 2.0, expected to be released in late 2026, which will incorporate lessons learned from the implementation of the AI Accountability Act.

Example: The Department of Defense adopted the NIST AI RMF as the foundation for its own AI ethics guidelines in June 2024.
