
  • Early Childhood Stress Linked to Increased Atopic Dermatitis Activity

    TOPLINE: Common childhood stressful life events, such as starting a new school, moving homes, or having a new sibling, are associated with an increased risk for atopic dermatitis (AD) activity and severity in children, a study suggests.

    METHODOLOGY: Researchers conducted a longitudinal cohort study of 13,972 children aged 1-8.5 years from the Avon Longitudinal Study of Parents and Children in the United Kingdom; 4454 had AD. Primary caregivers completed standardized questionnaires on about 15-17 age-appropriate stressful life events over the past year at 7 timepoints (18, 30, 42, 57, 69, 81, and 103 months). Study outcomes were prevalence and severity of AD. Median follow-up was 81 months from birth. Children with AD were more likely to be female, have a higher socioeconomic status, and have mothers who experienced more prenatal stress.

    TAKEAWAY: The annual period prevalence of AD ranged from 18% to 21%, and 5% to 8% of individuals with AD reported having "quite bad" to "very bad" disease at any given timepoint. The most common stressful life events were starting school (91%), starting a new school (75%), the death of a pet (54%), moving homes (54%), and having a new sibling (45%). Each SD increase in stressful life events was associated with a 4% increase in the odds of active AD (odds ratio [OR], 1.04; 95% CI, 1.01-1.07), with the highest risk for children with moderate to severe AD (OR, 1.13; 95% CI, 1.03-1.23). Of all life events, only starting school was associated with a lower risk for active AD (OR, 0.93; 95% CI, 0.88-0.98). Each SD increase in cumulative stress scores was associated with a higher risk for active AD (OR, 1.11; 95% CI, 1.07-1.16) and severe disease (OR, 1.17; 95% CI, 1.05-1.31).

    IN PRACTICE: "In a longitudinal, population-based study, we found associations between the perceived impact of childhood stressful life events and AD," the authors wrote. The results "suggest that parents and providers may anticipate and proactively moisturize or treat to prevent potential AD flares around life events," they added, noting that stress-reducing modalities may also be helpful.

    SOURCE: The study was led by Katrina Abuabara, MD, MA, MSCE, Department of Dermatology, University of California, San Francisco, and was published online on January 28 in the Journal of Investigative Dermatology.

    LIMITATIONS: Study limitations included potential measurement error and bias in both exposure and outcome. The stressful life events scale was not independently validated, and caretaker-reported AD activity and severity may have introduced bias. Validated AD severity scores were unavailable for key timepoints, and potential missing data and selection bias were also concerns.

    DISCLOSURES: The study was supported by the National Institutes of Health and the Wellcome Trust. Abuabara reported receiving consulting fees and grants from pharmaceutical companies. Another author received grants and fellowships from various government organizations; other authors had no disclosures.

    Note: This article originally appeared on Medscape.

  • Loneliness During Adolescence Linked to Health Outcomes in Adulthood

    TOPLINE: Increased loneliness during adolescence is associated with various health and well-being outcomes in adulthood, including an increased risk for asthma and depression.

    METHODOLOGY: Researchers conducted a longitudinal study to examine the association between increased loneliness over 1 year during adolescence and subsequent health and well-being outcomes in adulthood. They utilized data from a nationally representative sample of 20,745 US adolescents (mean age, 15.04 years; 49.7% girls; 65.98% White) initially surveyed in 1994-1995, of whom 11,040 and 9003 were analyzed over average follow-up periods of 11.37 and 20.64 years, respectively. Loneliness was assessed at prebaseline (1994-1995) and baseline (1996) through the self-reported frequency of feeling lonely during the past 7 days, with response options ranging from "never" to "most of the time." The associations between changes in loneliness and 41 health and well-being outcomes, spanning physical health, health behavior, mental health, psychological well-being, social factors, and civic and prosocial behaviors, were evaluated.

    TAKEAWAY: Increased loneliness during adolescence was associated with an increased likelihood of asthma (relative risk [RR], 1.24; P = .041) and depression (RR, 1.25; P = .010) diagnoses in adulthood. Increased loneliness was also associated with an increased likelihood of a posttraumatic stress disorder diagnosis (odds ratio, 1.84; P = .002). Increased loneliness was associated with worse social outcomes, such as reduced romantic relationship quality and increased perceived discrimination, and worse psychological well-being outcomes, such as decreased happiness, job satisfaction, and optimism. No significant associations were found between increased loneliness and health behaviors or civic and prosocial behaviors.

    IN PRACTICE: "Our study likely captures early risk indicators for adverse health outcomes, highlighting pathways through which loneliness may gradually influence health outcomes over time," the authors wrote. "Our findings suggest that ongoing development and application of interventions and policies aimed at reducing loneliness is a promising method of reducing the risk of some adverse health/well-being outcomes for our adolescent and emerging adult populations," they added.

    SOURCE: This study was led by Eric S. Kim, PhD, of the University of British Columbia in Vancouver, British Columbia, Canada, and Renae Wilkinson, PhD, of the Institute for Quantitative Social Science at Harvard University in Cambridge, Massachusetts. It was published online on January 20, 2025, in the Journal of Adolescent Health.

    LIMITATIONS: The study acknowledged potential confounding by unmeasured variables and reverse causality, though these were considered unlikely. A single-item measure of loneliness limited the ability to differentiate between types of loneliness. The use of data from two separate waves for some outcomes potentially introduces a 10-year gap in their measurement.

    DISCLOSURES: This study was supported by grants from Michael Smith Health Research BC, the Canadian Institutes of Health Research, and the John Templeton Foundation. One author reported receiving licensing fees from Flerish, Inc. and Flourishing Metrics.

    Note: This article originally appeared on Medscape.

  • Adult ADHD Diagnosis Linked to Earlier Death

    A diagnosis of attention-deficit/hyperactivity disorder (ADHD) in adults was associated with a 7-year reduction in life expectancy, on average, compared with the general population, findings from a large study show. Compared with peers without ADHD, males with ADHD in the matched retrospective UK study died an estimated 7 years earlier, and females with the diagnosis died around 9 years earlier. "We believe that this is unlikely to be because of ADHD itself and likely caused by modifiable factors such as smoking, and unmet mental and physical health support and unmet treatment needs," the authors wrote, adding, "The findings illustrate an important inequity that demands urgent attention." The findings were published online January 23 in The British Journal of Psychiatry.

    Consequences of Impaired Executive Function

    Previous research has shown that adults with ADHD experience more unemployment, financial problems, contacts with the criminal justice system, and homelessness than those without the condition. ADHD has also been linked to a higher risk of suicide, and an earlier meta-analysis of eight studies found that people with ADHD are twice as likely as those in the general population to die prematurely. They are also more likely to engage in risky behaviors, such as smoking, drug use, and drinking, investigators noted. To learn more about ADHD and life expectancy, investigators examined electronic medical records from 794 UK primary care practices. The analysis included data on 30,039 adults diagnosed with ADHD at any point in their lives and 300,390 matched controls. The primary outcome was all-cause death. In the UK, primary care offices are updated with patients' deaths by the National Health Service Personal Demographics Service. Investigators used a Poisson model to estimate the mortality rate by single year of age for those with ADHD and the control group. They then used the modelled rates to estimate life expectancy at age 18 years using the period life table method as described by the Office for National Statistics. The median age at cohort entry was 18.95 years for males and 22.10 years for females.

    Shorter Lifespans

    Mortality rates were higher in the ADHD group (males, 0.83%; females, 2.22%) than in the control group (males, 0.52%; females, 1.35%). Among those with ADHD, death during follow-up was 1.89 times more likely in men (95% CI, 1.62-2.19) and 2.13 times more likely in women (95% CI, 1.79-2.53) compared with controls. ADHD was associated with a reduction in total life expectancy of 6.78 years in males (95% CI, 4.50-9.11) and 8.64 years in females (95% CI, 6.55-10.91). Average age at death in the ADHD group was 73 years in men (95% CI, 71.06-75.41) and 75 years in women (95% CI, 72.99-77.11). In the control group, average age at death was 80 years (95% CI, 79.34-80.74) for men and 84 years (95% CI, 83.12-84.44) for women. The authors called the findings "extremely concerning," adding that individuals with ADHD often have associated mental health challenges, including substance use, smoking, or compulsive behavior, that may contribute to premature death. "Only a small percentage of adults with ADHD have been diagnosed, meaning this study covers just a segment of the entire community," lead author Elizabeth O'Nions, PhD, epidemiologist at the Bradford Institute for Health Research, University College London, said in a press release. "More of those who are diagnosed may have additional health problems compared to the average person with ADHD. Therefore, our research may over-estimate the life expectancy gap for people with ADHD overall, though more community-based research is needed to test whether this is the case," she continued. Study limitations include the lack of information about cause of death and wide confidence intervals around certain point estimates, likely due to the relatively small number of participants with ADHD. In addition, the findings are probably not generalizable to other countries, settings, or time periods, the authors wrote.

    Experts Weigh In

    Several experts who were not part of the study weighed in on the findings in a statement from the UK-based nonprofit and independent Science Media Centre. Philip Asherson, PhD, professor of molecular psychiatry at King's College London, said the study illuminated the impact of an ADHD diagnosis on life expectancy. While the causes of early death are not yet confirmed, he noted, ADHD has been linked to cardiovascular disease and cancer and may be linked to autoimmune and other physical health disorders. "ADHD is increasingly recognized as a serious condition in adults associated with poor health outcomes," Asherson said. Of particular concern are limited access to diagnosis and treatment, including psychosocial support, he said, adding, "until this is addressed, the shorter life expectancy demonstrated in this study is likely to continue." Also commenting on the study was Oliver Howes, PhD, MBBS, professor of molecular psychiatry, Institute of Psychiatry, Psychology & Neuroscience, King's College London. "The study adds to lots of other evidence that people with other mental illnesses die sooner than people without mental illness to show this for ADHD as well," he said. The study's use of a large UK database is a strength, but a limitation was that investigators were unable to study how participants' ADHD diagnosis date was related to comorbid conditions or treatment efficacy. "More work is needed to understand what underlies the link between ADHD and premature death," he said.

    Note: This article originally appeared on Medscape.
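The period life table method mentioned above can be illustrated with a toy calculation. This is only a sketch of the general idea (survivorship accumulated from annual death probabilities); the mortality rates below are invented for demonstration and are not taken from the study, which derived its rates from a Poisson model of the cohort data.

```python
# Sketch of the period life table idea: given annual death
# probabilities q_by_age (age -> probability of dying within that
# year), accumulate expected person-years lived to estimate remaining
# life expectancy at a starting age. Illustrative only; the rates
# below are made up for demonstration.

def life_expectancy(q_by_age, start_age):
    """Estimate remaining life expectancy at start_age."""
    alive = 1.0          # fraction surviving to the start of each age
    person_years = 0.0   # accumulated expected years lived
    for age in sorted(a for a in q_by_age if a >= start_age):
        q = q_by_age[age]
        deaths = alive * q
        # Those who die are assumed to live half the year on average.
        person_years += (alive - deaths) + 0.5 * deaths
        alive -= deaths
    return person_years

# Invented example: a flat 1% annual mortality from age 18 to 99.
toy_rates = {age: 0.01 for age in range(18, 100)}
print(round(life_expectancy(toy_rates, 18), 1))  # → 55.9
```

Comparing two such estimates (one computed from the ADHD group's rates, one from the controls') gives the life expectancy gap the study reports.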

  • Add Anxiety and Depression to CVD Risk Prediction Model?

    Adding measures of anxiety and depression to the American Heart Association's Predicting Risk of Cardiovascular Disease Events (PREVENT) model may have little additional effect on predicting the risk for new cases of cardiovascular disease (CVD) at the population level, according to new research. Yet numerous studies have noted the associations between these mental health conditions and an increased risk for CVD, the authors wrote, so other tools, broader mental health conditions, or diagnostic interview data could be useful in future studies. "The additive values of these mental health measures were lower than we expected. New CVD risk factors don't always improve the existing CVD risk scores meaningfully, as shown by a previous study, which investigated the additive values of apolipoproteins," said lead author Shinya Nakada, a PhD student at the University of Glasgow, Glasgow, Scotland, who researches mental health and CVD. "However, new predictors with limited additive values may still offer practical advantages when it is reproducible, noninvasive, and less costly," Nakada said. "The measures used in this study were drawn from real-world settings, such as electronic health records and short self-reports, since they are of lower cost and can be implemented relatively easily." The study was published online on January 13 in the Canadian Medical Association Journal.

    Little Additional Effect

    In 2024, the PREVENT model was developed to predict the risk for an initial CVD event among the general adult population aged 30-79 years in the United States. The model incorporates risk predictors relevant to cardiovascular, kidney, and metabolic diseases, including factors related to obesity, diabetes, and antihypertensive and statin medications. Because anxiety and depression are the most common mental health conditions worldwide, increasing in prevalence, and independently associated with CVD, Nakada and colleagues analyzed whether incorporating certain mental health measures into the PREVENT model could detect high-risk groups that were previously overlooked. The research team developed and internally validated prediction models of 10-year risk for incident CVD (comprising coronary artery disease, stroke, and heart failure) using cohort data from the UK Biobank. They included mental health predictors such as baseline depressive symptom score, self-reported anxiety and depression, and a record-based history of anxiety and depression diagnoses. After randomly assigning more than 500,000 UK Biobank participants to a derivation set (60%) and a validation set (40%), the research team determined incremental predictive values based on C-indices, sensitivity, specificity, and net reclassification improvement indices, using a threshold of 10-year risk for incident CVD > 5%. The derivation set of 195,000 participants had 15,787 CVD events, whereas the validation set of 130,000 participants had 10,639 CVD events. In both groups, participants with a CVD event were more likely to be older, men, and current smokers. They were also more likely to have higher systolic blood pressure, diabetes, and mental health conditions. In the single-predictor models, all mental health predictors were associated with CVD. In the validation set, including all the mental health measures except self-reported anxiety resulted in a modest increase in the C-index and specificity, though sensitivity was unchanged. Among the mental health predictors, the depressive symptom score yielded the largest, though still modest, improvements in C-index (difference of 0.005) and specificity (difference of 0.89%). As a result, the model including the depressive symptom score had the largest overall (1.14) and nonevent (0.89) net reclassification indices. Based on these findings, models including the depressive symptom score were more likely to distinguish those at higher risk for CVD from those at lower risk, the authors wrote. For every 1000 people at lower risk, about nine would no longer be incorrectly classified as high risk, which suggests relatively limited effects on risk classification. "In our study, although the measures of anxiety and depression were associated with CVD, including them in PREVENT had little additional effect on the risk classification of CVD at the population level," Nakada said. "So it may not be worthwhile, especially when it is costly."

    Extremely Debilitating Conditions

    However, new CVD risk predictors such as anxiety and depression could help identify targeted therapeutic pathways or actionable responses in primary care settings, the study authors wrote. For instance, the depressive symptom score may help identify previously undiagnosed depression and could be considered during CVD risk prevention visits, despite its limited contribution to the PREVENT model itself. "Anxiety and depression are extremely debilitating conditions, and in some cases can substantially interfere with treatment for CVD," said Scott Lear, PhD, professor of health sciences and the Pfizer/Heart and Stroke Foundation chair in cardiovascular prevention research at Simon Fraser University in Burnaby, British Columbia, Canada. "In addition, some CVD risk factors likely to be affected by depression and anxiety, such as smoking and blood pressure, are already included in the PREVENT model, so this may water down the value of assessing anxiety and depression," he said. Lear, who wasn't involved with this study, has researched various CVD risk predictors, including by ethnicity. In this study using UK Biobank data, participants were more likely to be White, affluent, and healthy than the general UK population, so the results may not be generalizable to other populations worldwide. Future studies could look at other populations and broader mental health measures, such as anxious feelings and insomnia, and their associations with CVD risk prediction, the authors wrote. "I wouldn't want people to think that anxiety and depression don't matter when considering CVD risk and treatment," Lear said. "I'm sure many primary care physicians realize that." No independent funding for the study was reported. Nakada and Lear reported no relevant financial relationships.

    Note: This article originally appeared on Medscape.
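The net reclassification improvement (NRI) metric the study uses can be illustrated with a small sketch. The event NRI counts people who had a CVD event and were moved up to the high-risk category by the new model (minus those moved down); the nonevent NRI counts people without an event moved down (minus those moved up). The data below are invented purely to show the arithmetic; they have no relation to the study's numbers.

```python
# Toy sketch of net reclassification improvement (NRI) between an
# old and a new risk model. All data here are invented for
# illustration of the arithmetic only.

def nri(old_high, new_high, had_event):
    """old_high / new_high: lists of bools (classified high risk by
    the old / new model); had_event: list of bools (event occurred).
    Returns (event_nri, nonevent_nri) as percentages."""
    up = [n and not o for o, n in zip(old_high, new_high)]
    down = [o and not n for o, n in zip(old_high, new_high)]
    events = [i for i, e in enumerate(had_event) if e]
    nonevents = [i for i, e in enumerate(had_event) if not e]
    event_nri = 100 * (sum(up[i] for i in events)
                       - sum(down[i] for i in events)) / len(events)
    nonevent_nri = 100 * (sum(down[i] for i in nonevents)
                          - sum(up[i] for i in nonevents)) / len(nonevents)
    return event_nri, nonevent_nri

# Invented example: 2 people with events, 3 without.
e, n = nri(old_high=[False, False, True, True, False],
           new_high=[True, False, True, False, False],
           had_event=[True, True, False, False, False])
print(e, round(n, 1))  # → 50.0 33.3
```

A positive nonevent NRI, as reported for the depressive symptom score model, means the new model correctly reclassifies more non-cases down to low risk than it incorrectly pushes up.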

  • Novel Therapy Linked to Cognitive Benefit in Schizophrenia

    TOPLINE: Xanomeline/trospium chloride, a novel M1/M4 muscarinic receptor agonist, is linked to significant cognitive improvement in patients with acute schizophrenia and cognitive impairment in pooled data from two phase 3 trials.

    METHODOLOGY: Combined data from two 5-week inpatient phase 3 trials were used to evaluate the efficacy of xanomeline/trospium monotherapy vs placebo in 357 adult patients with acute schizophrenia. Computerized assessments of four key cognitive domains (executive function, visual memory, sustained attention, and verbal recall/recognition) were completed at screening, baseline, and weeks 3 and 5. Treatment involved flexible dosing, starting with 50 mg/20 mg twice daily for 2 days, increasing to 100 mg/20 mg twice daily for days 3-7, with an optional increase to a maximum of 125 mg/30 mg twice daily from day 8. Participants were classified as cognitively impaired if they performed one or more SDs below healthy normative standards on baseline tests.

    TAKEAWAY: Xanomeline/trospium vs placebo showed no significant effect on cognitive scores in the full sample. In the cognitively impaired subgroup, patients receiving xanomeline/trospium (n = 71) had significantly greater cognitive improvement at 5 weeks than those receiving placebo (n = 66; effect size, 0.54; P = .004). Those in the treatment group showed the largest improvement in verbal recognition memory, with additional benefits observed for sustained attention and executive functioning. The treatment effect increased with a more stringent baseline impairment threshold of at least 1.5 SDs below healthy normative standards (effect size, 0.8). Cognitive improvements showed minimal correlation with changes in total, positive, and negative symptoms in both treatment groups, suggesting independent mechanisms of action.

    IN PRACTICE: "Collectively, the xanomeline/trospium clinical studies reflect the first time a monotherapy for the treatment of schizophrenia has shown a replicable cognitive benefit," the lead author said in a press release.

    SOURCE: The study, led by William P. Horan, PhD, Bristol Myers Squibb, Boston, was published online on December 11 in the American Journal of Psychiatry.

    LIMITATIONS: The study design focused on acutely symptomatic inpatients, limiting the ability to isolate cognitive function changes from symptom changes. Additional limitations included deviations from MATRICS trial guidelines, the relatively short 5-week treatment duration, the brief cognitive battery, and the lack of a functional co-primary measure.

    DISCLOSURES: This study was funded by Karuna Therapeutics, a Bristol Myers Squibb company. Four authors reported being employees of Bristol Myers Squibb. Full disclosures for all investigators are listed in the original article.

    Note: This article originally appeared on Medscape.

  • Discontinuing Antipsychotics

    Key Takeaways Antipsychotics are frequently prescribed off-label for children, despite limited long-term efficacy data and potential adverse effects, including metabolic and neurological issues. Comprehensive assessment and nonpharmacological approaches should precede antipsychotic use, with careful monitoring and consideration of tapering when possible. Short-term antipsychotic use can stabilize symptoms, but long-term use requires regular evaluation of benefits versus risks, with a focus on minimizing dosage. Medication substitution with less toxic alternatives should be considered, especially when adverse effects are present or when tapering is unsuccessful. “Bridget,” a 15-year-old girl, is referred by her foster care case manager for continued aggression. She has diagnoses of attention-deficit/hyperactivity disorder (ADHD), conduct disorder, and posttraumatic stress disorder (PTSD) related to sexual abuse she experienced in a stepfamily setting from 9 to 12 years of age. Her mother is receiving inpatient behavioral health care in an extended stay program. Bridget’s father is not available, and her 2 younger siblings are in different foster care settings. Bridget is currently prescribed risperidone 3 mg daily, methylphenidate extended-release 54 mg daily, and clonidine immediate-release 0.2 mg nightly. Approximately 1% of children aged 7 to 12 years and 1.5% of individuals aged 13 to 18 years are prescribed antipsychotic medications, according to findings from a study. 
These medications are sometimes US Food and Drug Administration (FDA) approved for conditions such as schizophrenia, bipolar disorder, and irritability associated with autism.2 However, approximately 65% of antipsychotic medications for children and adolescents are prescribed for off-label uses, including managing aggression, agitation, disruptive behaviors, and irritability as well as adjunct treatment for ADHD.3 Although there are limited data on the long-term effectiveness of these medications, research has shown that they can lead to a reduction in neuronal tissue and various neurological and metabolic adverse effects.4 Additionally, high doses of antipsychotics have been linked to increased mortality rates in this age group. This article expands on our previous work and discusses the appropriate duration for antipsychotic use and strategies for safely tapering or discontinuing them. First Things First: Is There a Need? Although frank psychosis and bipolar mania remain clear indications for use or consideration of dopamine receptor 2 (D2)–blocking antipsychotic medications, in these circumstances and in all other cases, a comprehensive assessment is warranted to clarify the diagnoses and options for addressing the symptoms. For example, the differential diagnosis for an agitated teenager with seemingly mixed mania may include malignant catatonia, for which antipsychotic medication is generally contraindicated, as it is likely to exacerbate the condition. This is particularly more common in patients with autism. Another more common concern is aggression associated with ADHD; Blader found that well-crafted trials of stimulant medications (eg, first trying methylphenidate and then trying dextroamphetamine) were effective for managing aggression in approximately 60% to 80% of young patients. 
Given the potential severity of metabolic and neurologic adverse effects of antipsychotic medications, we wrote treatment algorithms to begin with nonpharmacological approaches and treatment of co-occurring conditions before approaching the use of antipsychotics. We also made recommendations for monitoring patients during their use and, once patients are stable, looking at whether, when, and how these medications might be discontinued. How Well Do Antipsychotics Work? Antipsychotics can be beneficial for short-term stabilization, such as preventing psychiatric hospitalization, helping a student remain in a less restrictive school environment, and reducing aggression or self-harming behaviors. However, the evidence does not strongly support their use in treating severe ADHD or oppositional defiant disorder.13 Even in cases where there is FDA approval, such as for managing irritability in autism, it is important to assess whether the patient meets the criteria for antipsychotic treatment (eg, aggression, self-injury, severe mood instability) and whether other approaches might be effective. This could include addressing sensory or communication challenges or considering medications with a safer adverse effect profile. For How Long Should They Be Used? The use of antipsychotic medications in children has primarily been studied and approved by the FDA for short-term treatment, generally up to 6 months.14 There is limited research on the long-term benefits and adverse effects of antipsychotic use in children.15 Concerns commonly associated with antipsychotic medications include the following: Metabolic effects (such as weight gain, diabetes, and hyperlipidemia) Somnolence Prolonged corrected QT interval Elevated prolactin levels Extrapyramidal symptoms Neuroleptic malignant syndrome How long should you use these medications? 
It will not always be possible to reduce D2-blocking antipsychotic medications, because some patients have symptoms related to psychosis or affective conditions that do not respond to other approaches and where the benefits of continuing the medication outweigh the risks, perhaps keeping the patient safe and/or functioning. We recommend that you use them with an intent to reduce and discontinue their use whenever and however possible, such as when an episode of mania has passed, when you can build a more robust therapy or community plan to support the patient, or when you can substitute less toxic medications, such as antiepileptic drugs for mood stability or stimulant medications for aggression in ADHD, as noted previously. Given these considerations, how do we minimize the use of D2-blocking antipsychotic medications? Start With Discontinuation in Mind When initiating antipsychotic medication, it is crucial to consider the end goal from the start. As part of the informed consent process, discuss the intended duration of treatment with patients and their families. Important topics to cover include the following: The severity of symptoms The natural progression of the condition The child’s age The response to other psychosocial treatments Since there is no set duration for antipsychotic use in nonpsychotic conditions, it is important to monitor the frequency and severity of specific symptoms. Collaborate with patients and families to determine the desired level and duration of improvement that would justify tapering or discontinuing the medication. As a possible guideline, when we use selective serotonin reuptake inhibitors (SSRIs) for depression, we like to see at least 6 to 12 months of remission before we consider tapering and to look at any seasonal or stress-related aspects such as fall/winter effects, school-related stress, or even loss of school structure. 
Still, the treatment period should be as brief as possible for most clinical situations in children and adolescents. When to Reduce Antipsychotics? At each appointment, discuss the ongoing treatment plan with the patient and their family, focusing on the following: How much improvement has occurred since the initial symptoms? Are there concerning adverse effects (eg, weight gain, elevated cholesterol level, drowsiness, involuntary movements)? How well is the patient adhering to the medication regimen and required monitoring? Are the patient and family open to the idea of tapering the medication? Tapering may not be feasible for patients with primary psychotic disorders or severe mood instability or those with previous failed tapering attempts. If there are ongoing symptoms, take another look at the differential diagnosis and possible treatment options. You do not want to continue ineffective treatment with potentially severe adverse effects. In any case, if you are continuing the medication, you should document the reasons for doing so and consistently monitor for metabolic and movement-related adverse effects. Refer to the American Academy of Child and Adolescent Psychiatry’s Practice Parameter for the Use of Atypical Antipsychotic Medication in Children and Adolescents for monitoring guidelines.17 When you take over care for patients already taking antipsychotic medications, reassess the decision to continue them during your evaluation. For patients on long-term antipsychotic treatment, revisit this discussion every 6 months to evaluate whether continuing the medication remains the best option, considering its effectiveness and potential adverse effects. Begin these conversations by asking patients and families what they perceive as the benefits and drawbacks of the medication. How Slow Should You Go? Tapering does not necessarily mean discontinuing the medication entirely. 
Even minor dose reductions can help alleviate adverse effects such as elevated cholesterol level or weight gain and lower the risk of neurological adverse effects, such as tardive dyskinesia.18 Once you and your patient agree to taper, consider the following strategies: This is ideal for straightforward cases without co-occurring conditions or a complicated medication history. Ensure at least 3 to 6 months of stability and choose a time without new stressors. Arrange for additional support (eg, therapeutic interventions, school-based services). Reduce the dose by no more than approximately 25% of the original dose every 3 to 6 months or 5% to 10% every 2 months.19 Although there are no clear data to guide the rate of the dosage reduction in children and teenagers, adult research supports very gradual reduction over many months to 2 or 3 years to reduce relapse rates of psychosis in patients who have been on long-term antipsychotic treatment. Consider gradual weekly reductions toward the new dose (eg, recommending the patient take the current dose daily and then substituting the new lower target dose for 1 of 7 days the first week, 2 of 7 days the second week, etc, until every day is at the new lower dose). Schedule regular follow-ups to monitor for any worsening of target symptoms. Bridget is now receiving therapy for her PTSD, and you have stabilized her aggression by shifting her from methylphenidate to dextroamphetamine. However, she is gradually gaining weight, and you decide to reduce her risperidone to 2.5 mg daily using a gradual weekly reduction over 6 weeks. Is Switching Medication an Option? In many situations, you can effectively substitute the antipsychotic or reduce it substantially by using other less potentially toxic medications, such as in the following suggestions: For ADHD with aggression, consider using different kinds of stimulants or even nonstimulant ADHD medications. If there is cooccurring anxiety or irritability, consider trying SSRIs. 
For behavioral dysregulation or aggression associated with underlying anxiety or autism, explore options such as α agonists or β-blockers. After 6 months, you have reduced Bridget’s risperidone to 0.5 mg daily. Her aggression is well managed, she is making friends, and she is doing better academically, although her appetite and weight have not stabilized. You attempt to discontinue the risperidone altogether, but Bridget becomes far more agitated. Intraclass Medication Substitution When a patient needs to stay on an antipsychotic due to severe symptoms (such as psychosis, self-injury, or mania), consider switching to a more weight-neutral option such as ziprasidone or lurasidone (keeping in mind that insurance coverage might make these medications challenging to start). This approach is also worth considering if a previous antipsychotic trial was unsuccessful or if the patient presents with psychotic symptoms or mania. Offer this alternative if a patient experiences adverse effects that lead to discontinuation of a medication such as risperidone or aripiprazole. Bridget is stable on ziprasidone 20 mg along with the dextroamphetamine and clonidine. Screening and follow-up ECG results are normal, and her weight gain has stabilized. Bridget’s mother is in a step-down psychiatric placement facility, and the family is beginning therapy aimed at possible reunification. Concluding Thoughts Antipsychotics can be lifesaving, but they should only be used when necessary and when accompanied by continuous discussions about the length of treatment and strategies to minimize dosage and adverse effects. Note: This article originally appeared on Psychiatric Times .
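The "gradual weekly reduction" arithmetic described in the tapering section above (substituting the lower target dose on one additional day each week until every day is at the new dose) can be sketched in a few lines of code. This is purely an illustration of the scheduling pattern: the function name, the week count, and the risperidone dose values are hypothetical examples, not clinical guidance.

```python
# Illustrative sketch of a weekly dose-substitution taper: week n has n days
# at the lower target dose and the remaining days at the current dose, until
# every day of the week is at the target. Doses are hypothetical examples.

def weekly_substitution_schedule(current_dose: float, target_dose: float,
                                 days_per_week: int = 7) -> list[list[float]]:
    """Return one list of daily doses for each week of the taper."""
    schedule = []
    for week in range(1, days_per_week + 1):
        # Week n: n days at the target dose, the rest at the current dose.
        doses = [target_dose] * week + [current_dose] * (days_per_week - week)
        schedule.append(doses)
    return schedule

# Example: stepping a daily risperidone dose from 3.0 mg down to 2.5 mg.
plan = weekly_substitution_schedule(3.0, 2.5)
for i, week in enumerate(plan, start=1):
    avg = sum(week) / len(week)
    print(f"Week {i}: {week} (mean {avg:.2f} mg/d)")
```

Printing the plan shows the average daily dose easing down a little each week rather than dropping in a single step, which is the point of the scheme.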

  • Brain Changes in Youth Who Use Substances: Cause or Effect?

A widely accepted assumption in the addiction field is that neuroanatomical changes observed in young people who use alcohol or other substances are largely the consequence of exposure to these substances. But a new study suggests neuroanatomical features in children, including greater whole brain and cortical volumes, are evident before exposure to any substances. Investigators, led by Alex P. Miller, PhD, assistant professor, Department of Psychiatry, Indiana University, Indianapolis, noted that the findings add to a growing body of work that suggests individual brain structure, along with environmental exposure and genetic risk, may influence risk for substance use disorder. The findings were published online on December 30, 2024, in JAMA Network Open . Neuroanatomy a Predisposing Risk Factor? Earlier research showed that substance use is associated with lower gray matter volume, thinner cortex, and less white matter integrity. While it has been widely thought that these changes were induced by the use of alcohol or illicit drugs, recent longitudinal and genetic studies suggest that the neuroanatomical changes may also be predisposing risk factors for substance use. To better understand the issue, investigators analyzed data on 9804 children (mean baseline age, 9.9 years; 53% male; 76% White) at 22 US sites enrolled in the Adolescent Brain Cognitive Development (ABCD) Study, which is examining brain and behavioral development from middle childhood to young adulthood. Researchers collected information on the use of alcohol, nicotine, cannabis, and other illicit substances from in-person interviews at baseline and years 1, 2, and 3, as well as interim phone interviews at 6, 18, and 30 months. MRI scans provided extensive brain structural data, including global and regional cortical volume, thickness, surface area, sulcal depth, and subcortical volume. 
Of the total, 3460 participants (35%) initiated substance use before age 15, with 90% reporting alcohol use initiation. There was considerable overlap between initiation of alcohol, nicotine, and cannabis. Researchers tested whether baseline neuroanatomical variability was associated with any substance use initiation before or up to 3 years following initial neuroimaging scans. Study covariates included baseline age, sex, pubertal status, familial relationship (eg, sibling or twin), and prenatal substance exposures. Researchers didn’t control for sociodemographic characteristics as these could influence associations. Significant Brain Differences Compared with no substance use initiation, any substance use initiation was associated with larger global neuroanatomical indices, including whole brain (β = 0.05; P = 2.80 × 10⁻⁸), total intracranial (β = 0.04; P = 3.49 × 10⁻⁶), cortical (β = 0.05; P = 4.31 × 10⁻⁸), and subcortical volumes (β = 0.05; P = 4.39 × 10⁻⁸), as well as greater total cortical surface area (β = 0.04; P = 6.05 × 10⁻⁷). The direction of associations between cortical thickness and substance use initiation was regionally specific; any substance use initiation was characterized by thinner cortex in all frontal regions (eg, rostral middle frontal gyrus, β = −0.03; P = 6.99 × 10⁻⁶), but thicker cortex in all other lobes. It was also associated with larger regional brain volumes, deeper regional sulci, and differences in regional cortical surface area. The authors noted total cortical thickness peaks at age 1.7 years and steadily declines throughout life. By contrast, subcortical volumes peak at 14.4 years of age and generally remain stable before steep later life declines. Secondary analyses compared initiation of the three most commonly used substances in early adolescence (alcohol, nicotine, and cannabis) with no substance use. Findings for alcohol largely mirrored those for any substance use. 
However, the study uncovered additional significant associations, including greater left lateral occipital volume and bilateral para-hippocampal gyri cortical thickness and less bilateral superior frontal gyri cortical thickness. Nicotine use was associated with lower right superior frontal gyrus volume and deeper left lateral orbitofrontal cortex sulci. And cannabis use was associated with thinner left precentral gyrus and lower right inferior parietal gyrus and right caudate volumes. The authors noted results for nicotine and cannabis may not have had adequate statistical power, and small effects suggest these findings aren’t clinically informative for individuals. However, they wrote, “They do inform and challenge current theoretical models of addiction.” Associations Precede Substance Use A post hoc analysis further challenges current models of addiction. When researchers looked only at the 1203 youth who initiated substance use after the baseline neuroimaging session, they found most associations preceded substance use. “That regional associations may precede substance use initiation, including less cortical thickness in the right rostral middle frontal gyrus, challenges predominant interpretations that these associations arise largely due to neurotoxic consequences of exposure and increases the plausibility that these features may, at least partially, reflect markers of predispositional risk,” wrote the authors. A study limitation was that unmeasured confounders and undetected systemic differences in missing data may have influenced associations. Sociodemographic, environmental, and genetic variables that were not included as covariates are likely associated with both neuroanatomical variability and substance use initiation and may moderate associations between them, said the authors. 
The ABCD Study provides “a robust and large database of longitudinal data” that goes beyond previous neuroimaging research “to understand the bidirectional relationship between brain structure and substance use,” Miller said in a press release. “The hope is that these types of studies, in conjunction with other data on environmental exposures and genetic risk, could help change how we think about the development of substance use disorders and inform more accurate models of addiction moving forward,” Miller said. Reevaluating Causal Assumptions In an accompanying editorial, Felix Pichardo, MA, and Sylia Wilson, PhD, from the Institute of Child Development, University of Minnesota Twin Cities, Minneapolis, suggested that it may be time to “reevaluate the causal assumptions that underlie brain disease models of addiction” and the mechanisms by which it develops, persists, and becomes harmful. Neurotoxic effects of substances are central to current brain disease models of addiction, wrote Pichardo and Wilson. “Substance exposure is thought to affect cortical and subcortical regions that support interrelated systems, resulting in desensitization of reward-related processing, increased stress that prompts cravings, negative emotions when cravings are unsated, and weakening of cognitive control abilities that leads to repeated returns to use.” The editorial writers praised the ABCD Study’s large sample size, which provides the precision, statistical accuracy, and ability to identify both larger and smaller effects that are critical for addiction research. Unlike most addiction research, which relies on cross-sectional designs, the current study used longitudinal assessments, another of its strengths, they noted. 
“Longitudinal study designs like in the ABCD Study are fundamental for establishing temporal ordering across constructs, which is important because establishing temporal precedence is a key step in determining causal links and underlying mechanisms.” The inclusion of several genetically informative components, such as the family study design, nested twin subsamples, and DNA collection, “allows researchers to extend beyond temporal precedence toward increased causal inference and identification of mechanisms,” they added. Note: This article originally appeared on Medscape .

  • Loneliness, Isolation Affect One Third of US Adults Over 50

TOPLINE: About one third of US adults aged 50-80 years report feeling lonely and socially isolated, a new study of data from 2018-2024 shows. While the levels have returned to the pre-pandemic range, investigators say the findings suggest clinicians should screen for loneliness and isolation. METHODOLOGY: Researchers conducted a nationally representative survey of US adults aged 50-80 years through the University of Michigan National Poll on Healthy Aging at six timepoints between 2018 and 2024. Data collection involved online surveys conducted using the Ipsos KnowledgePanel from 2018 to 2021, transitioning to online and phone surveys conducted using the National Opinion Research Center AmeriSpeak panel from 2022 to 2024. Sample sizes ranged between 2051 and 2576 respondents, with completion rates ranging from 61% to 78% across the survey periods. TAKEAWAY: Loneliness rates among adults aged 50-80 years showed notable fluctuation, starting at 34% (95% CI, 31.7%-36.2%) in 2018, rising to 41% (95% CI, 39.1%-43.7%) in 2020, and returning to 33% (95% CI, 31.7%-35.1%) by 2024. Social isolation showed a similar pattern in the study group, starting at 27% (95% CI, 24.5%-28.8%) in 2018, peaking at 56% (95% CI, 53.4%-58.1%) in 2020, and declining to 29% (95% CI, 27.5%-30.9%) by 2024. Higher rates of loneliness and social isolation were reported among individuals who did not work, lived alone, had lower household incomes, and self-reported fair or poor physical and mental health, compared with those who reported excellent, very good, or good health. IN PRACTICE: The findings suggest that “much like routinely asking about diet and exercise, clinicians should consider screening older adults for loneliness and social isolation and connect them with appropriate resources,” the investigators wrote. SOURCE: The study was led by Preeti N. Malani, MD, MSJ, University of Michigan Medical School, Ann Arbor, Michigan. It was published online on December 9 in JAMA. 
LIMITATIONS: The study was limited by possible recall bias, reliance on self-reported data, lack of longitudinal results, and differences in survey timing, panels, and question framing across years. The findings may not have been applicable to excluded groups such as nursing home residents or individuals aged > 80 years, which limited their generalizability. DISCLOSURES: The study was supported by AARP and Michigan Medicine and the Department of Veterans Affairs, Veterans Health Administration, and Health Systems Research. One author reported receiving consulting fees and honoraria from various organizations. Details are provided in the original article. Note: This article originally appeared on Medscape .

  • The Tapestry of Neuroplasticity: Rewiring Our Brain

    Key Takeaways Neuroplasticity is central to psychiatry, with brain health factors like nutrition and exercise being crucial for its optimization. BDNF is a key intermediary in neuroplasticity, with decreased levels associated with psychiatric disorders. Treatments like psychotherapy, ECT, and certain medications enhance BDNF, promoting neuroplasticity. Medication-assisted psychotherapies using psychoplastogens show promise but face regulatory challenges. A new treatment paradigm is needed in psychiatry to support enduring neuroplastic changes for optimal patient outcomes. As Psychiatric Times celebrates its 40th year, psychiatry’s tapestry is just beginning. The threads that currently exist seem to be weaving into a magnificent fabric that we have named neuroplasticity. Our challenge and opportunity is to optimize all of the threads we currently understand and remain open to the many additional threads that are currently unknown or just beginning to appear. Some of the most important threads that facilitate brain health, which provides a necessary foundation for neuroplasticity to occur, get lost in the Western medical model treatment process, where time is limited and interventions commonly focus on medications and procedures. We know now that the same factors that maximize cardiac health also maximize brain health: healthy blood pressure/cholesterol levels/blood glucose/body mass index, minimal substance use, quality sleep, good nutrition, physical activity, ability to manage stress, and supportive relationships. In addition to these, the brain thrives when it is learning new information, encountering novel experiences, and solving problems. Although certainly not a panacea, neuroplasticity provides hope that we have an inborn ally during our journey toward healing and maximum functioning. 
The Birth of Neuroplasticity In the 1970s, experimental research that involved severing the afferent neuron of a monkey’s limb just before where it enters the spinal cord to ascend to the brain led to the revolutionary field of neuroplasticity. This research extended into the early 1990s, when it demonstrated that the monkey’s brain undergoes cortical remapping and connects to any adjacent active neurons when it ceases receiving input from the deafferented limb. Thus, the concept of neuroplasticity in the human brain was born. Learning Music as a Model During that early excitement, researchers looked for potential models that could quantify brain changes over time in an activity that required rigorous ongoing training. Researchers hypothesized that musicians provided an ideal model to study brain plasticity by using neuroimaging to monitor brain structure and function before and after a period of intensive training. Another research team reported on structural brain changes in early childhood following 15 months of musical training. Structural changes correlated with improvements in relevant motor and auditory musical skills.3 Similarly, in a review of studies looking at changes in brain structure and function in musicians after musical practice, magnetic resonance imaging demonstrated structural plasticity while neurophysiological activation patterns demonstrated functional plasticity. The authors concluded that “experience can shape brain anatomy and brain physiology.” BDNF: A Proxy for Neuroplasticity An established intermediary in the process of neuroplasticity is brain-derived neurotrophic factor (BDNF), which is present throughout the peripheral and central nervous systems, especially in the hippocampus and prefrontal cortex. BDNF has been shown to facilitate the maturation, differentiation, and longevity of neurons. One well-established pathway involves a presynaptic glutamate surge. 
Postsynaptically, this agonizes AMPA-glutamate receptors, which induces production of BDNF. BDNF agonizes tropomyosin-related kinase B receptors, which promote a cascade of molecular events culminating in increased activity of the mammalian target of rapamycin (mTOR). mTOR serves as the orchestrator of the synthesis of scaffolding proteins and dendritic growth/synaptogenesis. Many psychiatric disorders are associated with decreased levels of BDNF compared with those found in healthy individuals. A growing literature has looked at serum and/or plasma BDNF levels as a proxy for neuroplasticity in patients pre- and posttreatment. The strongest data supporting BDNF as a proxy for neuroplasticity come from research in psychopharmacology, with a growing body of evidence supporting BDNF’s role in psychotherapy. Psychotherapy and Other Treatments Jeffrey Schwartz, MD, was a pioneer in demonstrating the ability of cognitive behavior therapy to change brain chemistry without medications in patients with obsessive-compulsive disorder.6 Using positron emission tomography, he demonstrated a significant bilateral decrease in caudate nucleus metabolic rates of glucose after 10 weeks of intensive exposure-response prevention and cognitive behavior therapy, accompanied by symptomatic improvement. Subsequent research has supported the hypothesis that psychotherapy can facilitate a rewiring of the brain through the production of increased BDNF. Other investigators have even hypothesized that good quality psychotherapy may facilitate interbrain plasticity between the therapist and patient during their sessions: recurrent exposure to high interbrain synchrony may lead to enduring neuroplastic changes in the patient that are then activated in relationships outside of therapy. Beyond psychotherapy, there are other treatments that impact neuroplasticity. 
Electroconvulsive therapy (ECT), one of psychiatry’s oldest treatments, has demonstrated increased BDNF levels in individuals with medication-resistant depression. For most patients, the significant increase in BDNF appeared 1 month after the completion of ECT. Psychopharmacology One intriguing property of certain psychiatric medications is their effect of increasing BDNF, sometimes as soon as several hours after administration. We are early in understanding the effects of specific antidepressants and antipsychotics on BDNF, although it is well-documented that BDNF plays an important role in depression, bipolar disorder, and psychosis, especially in the hippocampus and the prefrontal cortex. Lithium is an example of such an agent. It has been shown to upregulate BDNF, and chronic treatment with lithium has demonstrated an associated increase in BDNF when administered in therapeutic as well as low doses in both the hippocampal and cortical regions of the brain. Neuroimaging studies have demonstrated an association between long-term lithium treatment and increased gray matter volume in the ventral prefrontal cortex and other brain regions related to cognition and emotional processing. It has also been proposed that chronic lithium treatment in patients with bipolar disorder may delay or decrease the risk of the onset of dementia. Medication-Assisted Psychotherapy Psychoplastogens are a molecularly diverse class of medications that initiate rapid neuronal plasticity via the rapid production of BDNF.16,17 These medications include 3,4-methylenedioxymethamphetamine (MDMA), ketamine, esketamine, psilocybin, and lysergic acid diethylamide (LSD). The downstream effect is increased activity of mTOR. 
With medication-assisted psychotherapies (MAPs), the psychoplastogens are administered to patients early in treatment and commonly in a small number of doses—usually between 1 and 3—in the presence of the psychotherapist(s) with whom they will work intensively (for up to as many as 15 sessions). Hypothetically, the psychoplastogen increases patients’ access to difficult psychological material (as with MDMA), facilitates a meaningful experience with a strong emotional response (as with psilocybin/LSD), and/or creates a window of neuroplasticity that can enhance improved functioning of a dysregulated circuit (as with ketamine/esketamine). With all of these agents, an extended period of neuroplasticity may allow for significant and enduring changes in neuronal connectivity through the intensive psychotherapeutic process that is an integral part of the treatment. (Disappointingly, MAP received a setback in August 2024 when the FDA decided against approving MDMA-assisted psychotherapy and requested an additional phase 3 study.) Conclusion Thirty years ago, the medical profession and neuroscientists believed that the human brain was fully wired at birth. In retrospect, this was naive, as we have the ability to learn new information, languages, musical instruments, and technical skills, and to adapt to significant changes throughout our lives. Where but in the brain would the capacity for all of this learning, adaptation, and mastery occur? The catch is that it takes time, motivation, practice, and intention for us to orchestrate this neuroplasticity. In my opinion, the current structure of clinical psychiatric practice lacks these qualities, with the 15-minute medication check as the common standard for time spent with a patient. If our treatment goal is to facilitate enduring neuroplastic changes in our patients’ brains, we must create a new treatment paradigm that supports that process. Then we can truly continue to weave our tapestry. 
Note: This article originally appeared on Psychiatric Times .

  • Neuroplasticity in Action

Key Takeaways Phantom limb syndrome is explained by cortical remapping, where the brain reassigns sensory inputs to adjacent areas post-amputation. Edward Taub's Constraint-Induced Movement Therapy (CIMT) promotes neuroplasticity, aiding recovery in stroke patients by constraining healthy limbs. Functional MRI studies confirm increased brain activity in areas associated with improved function, supporting the effectiveness of CIMT. Neuroplasticity enables the adult nervous system to reorganize post-lesion, enhancing rehabilitation outcomes and shifting paradigms in neurorehabilitation. After reading about cortical remapping of the monkey’s brain, neurologist V.S. Ramachandran, MD, PhD, surmised that a similar phenomenon may explain the baffling syndrome of phantom limb, which had befuddled medical researchers and clinicians alike since it was first described by Silas Weir Mitchell, MD, shortly after the Civil War. Ramachandran examined a 17-year-old adolescent whose left arm had been amputated 3 weeks earlier after a motor vehicle accident and who reported that it felt as if the arm were still present. Having the patient keep his eyes tightly closed, Ramachandran brushed the patient’s left cheek with a cotton swab in various locations. Upon being asked where he felt the sensations, the patient noted it was on his left cheek, but also on his amputated hand, thumb, and index finger. Remarkably, the patient reported that when he felt an itch on his phantom palm, he felt relief by scratching his lower face. Ramachandran hypothesized that this patient’s brain had cortically remapped the postamputation silent somatosensory cortex on the postcentral gyrus associated with the left hand to the adjacent incoming neurons from the facial nerve. It is well established that the area of the postcentral gyrus that maps to the left hand borders the area that maps the left face. This was a concrete example of neuroplasticity in action. 
In the 1980s, Edward Taub, PhD, the chief scientist at the Institute for Behavioral Research in Silver Spring, Maryland, developed a hypothesis based on his research with monkeys: If the innervation of 1 arm was seriously damaged but the healthy arm was constrained so it could not be used to accomplish necessary tasks, the possibility existed for the patient to regain function in the damaged arm with structured physical therapy. During the 1990s, this hypothesis was proven by research in patients recovering from cerebral vascular accidents (CVAs) using a technique called Constraint-Induced Movement Therapy (CIMT). In a 2002 review article that detailed the clinical applications in humans that had thus far been established, the authors wrote, “Research from several laboratories has shown that the adult nervous system can reorganize after a lesion,” concretizing the paradigm shift toward human neuroplasticity. That same year, researchers leveraged functional magnetic resonance imaging to confirm an increase in brain activity in the cortical area involved in the increased function: patients who engage in CIMT have been shown to have increased activity in their contralateral premotor and secondary somatosensory cortex in association with improved function. A 2011 review on the management of CVAs summarized the research demonstrating the role of neuroplasticity in rehabilitation from strokes, and the effectiveness of CIMT. Note: This article originally appeared on Psychiatric Times .

  • Is the Case Settled? Cannabis and Criminal Responsibility

    Key Takeaways Cannabis-induced psychosis complicates criminal responsibility, with settled insanity as a potential defense if symptoms persist beyond intoxication. Legal standards for insanity and diminished capacity vary by jurisdiction, affecting the applicability of voluntary intoxication as a defense. Understanding the relationship between substance use and mental health is crucial for forensic evaluations, especially with increasing cannabis prevalence. Case Study “John,” a 25-year-old man with no prior criminal history, was arrested after breaking into a convenience store in the middle of the night and assaulting the clerk with a knife. Witnesses reported erratic and violent behavior, including shouting about being chased by invisible enemies. Upon arrest, John was incoherent, disoriented, responding to internal stimuli, and exhibiting signs of paranoia. He was charged with assault with a deadly weapon. While incarcerated for several months, John continued to exhibit signs of psychosis and was found incompetent to stand trial. His family revealed that John had heavily used high-potency cannabis for several years and had experienced increasing paranoia and auditory hallucinations over the past year. After more than 6 months of involuntary treatment with antipsychotic medication, John was ultimately deemed competent to stand trial. At trial, John’s defense team contended that he was not guilty by reason of insanity, asserting that his prolonged cannabis use had led to a state of settled insanity, which he was experiencing at the time of the crime. However, the prosecution countered by pointing to the fact that John’s psychosis resulted from voluntary intoxication, a condition often excluded from insanity defenses. Criminal Responsibility Findings from studies have shown an increased use of cannabis in the US resulting from its legalization for recreational purposes. 
However, findings from studies on the effects of recreational cannabis legalization on mental health issues have shown mixed results. Recent cases have highlighted the relationship between cannabis and criminal responsibility, particularly with regard to intoxication and its impact on a defendant’s mental state. Criminal responsibility refers to the extent to which a person can be held legally accountable for committing a crime. In such cases, defendants may raise various mental defenses to argue that their mental state at the time of the offense diminishes or negates their responsibility for the act. The insanity defense is one of the most well-known mental defenses. The exact standard for determining legal insanity has differed across time and jurisdictions, but it typically means that due to a mental illness or defect, an individual cannot recognize an act is wrong, fails to understand the nature or quality of the act, or is unable to control their behavior. Typically, a substance-induced disorder resulting from the voluntary consumption of an intoxicating substance, such as cannabis, does not constitute a mental disease or defect for the purposes of the insanity defense. However, in certain jurisdictions, the defense of settled insanity may offer an alternative route for defendants whose cannabis use leads to prolonged psychotic symptoms. If it can be shown that substance use triggered or worsened psychotic symptoms that continued beyond the acute effects of intoxication, the defendant may be eligible to use the defense of settled insanity. Settled Insanity The standard for settled insanity varies by jurisdiction and is not always clearly defined. Although the specific criteria may differ across regions, the challenge of separating the effects of mental illness from substance use remains consistent. Establishing a clear timeline for the onset of substance use and symptoms of mental illness can be difficult and at times even impossible. 
Evidence indicating that the defendant showed signs of mental illness close to the time of the offense but prior to substance use or that the symptoms present during the offense persisted beyond the effects of intoxication may support the determination of settled insanity. For instance, the persistence of psychotic symptoms far beyond what would be expected from cannabis intoxication alone, requiring significant clinical intervention, could support an opinion of symptoms that are more settled and not merely a result of acute intoxication. Once it is established that a lasting impairment may be at play, the forensic evaluator must assess whether the symptoms meet the statutory criteria for the insanity defense. Experts assessing defendants for a potential insanity defense in cases involving substance use may consider examining the factors outlined in the Table. Diminished Capacity All crimes, except strict liability offenses (where the prosecution is not required to prove that the defendant had intent or knowledge of committing the crime), require both an action (actus reus) and criminal intent (mens rea). If mens rea is absent, an individual cannot be found guilty of a crime that necessitates intent. Courts distinguish between general intent and specific intent crimes, with voluntary intoxication being relevant to the latter. Specific intent crimes require not only the intent to perform the criminal act but also the intention to achieve a specific result. If cannabis intoxication impaired the defendant’s ability to form this specific intent, it could serve as a partial defense. For example, a defendant charged with a specific intent crime, such as first-degree murder, may argue that due to cannabis-induced psychosis, they were unable to premeditate or intentionally carry out the killing. This in turn could result in a reduction of charges. 
A general intent crime requires that the defendant intended to commit the criminal act but not necessarily to achieve a particular result. For these crimes, it is enough that the individual acted with a wrongful purpose, even if they did not intend to cause a specific outcome. Voluntary intoxication is never accepted as a defense, not even a partial one, for general intent crimes. Mitigation Voluntary intoxication may be considered during the sentencing phase of a criminal proceeding. Some jurisdictions allow judges to consider the defendant’s intoxicated state when determining the severity of the sentence. In Texas, for example, voluntary intoxication may be cited as a mitigating factor in sentencing. A mitigating factor is a circumstance or piece of evidence that can reduce the severity of a defendant’s sentence or level of criminal responsibility, even if they are found guilty of a crime. Mitigating factors do not excuse the crime but help explain why it may have occurred, suggesting that a lesser punishment is more appropriate. Concluding Thoughts The laws regarding voluntary intoxication and mental health defenses differ significantly among jurisdictions, making it important for forensic psychiatrists to be well-versed in the legal standards of the area where a case is being tried. Whether or not voluntary intoxication can be used to argue diminished capacity or as a mitigating factor at sentencing varies widely. The recognition of settled insanity where long-term substance use results in lasting mental impairment even after intoxication has ended adds another layer of complexity. Understanding the complex relationship between substance use and mental health is critical to providing thorough and well-informed evaluations. Staying up to date on both the evolving science of substance-induced psychiatric conditions and jurisdiction-specific legal standards ensures that forensic assessments remain aligned with contemporary practices and legal standards. 
As cannabis and other substances become more prevalent in legal contexts, expertise in this area will only grow in importance. Note: This article originally appeared on Psychiatric Times.

  • US Dementia Cases Projected to Double Within 40 Years

The number of US adults who will develop dementia each year is projected to increase from approximately 514,000 in 2020 to about 1 million in 2060, new research shows. In addition, the lifetime risk of developing dementia after age 55 is estimated at 42%. The research showed that the relative growth in dementia cases is particularly pronounced for Black adults. These new findings, researchers say, “highlight the urgent need for policies that enhance healthy aging, with a focus on health equity.”

“The aging of the population means that the increased burden of cognitive decline and dementia, particularly among the oldest age group, is going to be significant, and we need to be prepared for it,” study investigator Josef Coresh, MD, PhD, director of the Optimal Aging Institute at NYU Grossman School of Medicine, New York City, told Medscape Medical News. He added that dementia may be preventable through strategies such as controlling vascular risk factors, treating sleep disorders, detecting and correcting hearing loss, managing mood disorders, and improving access to social support. The findings were published online on January 13 in Nature Medicine.

Diverse Cohort, More Rigorous Methodology

Over the past century, the US population has aged substantially, resulting in a rise in late-life diseases. Once an uncommon condition, dementia now affects more than 6 million Americans. The lifetime risk for dementia is a critical public health measure that can be used to raise awareness, enhance patient engagement in prevention, and inform policymaking, the investigators noted. The often-cited Framingham Heart Study estimates that 11%-14% of men and 19%-23% of women will develop dementia during their lifetime. But, as Coresh noted, those estimates were based on a predominantly White, relatively affluent, and well-educated cohort, as well as limited means of identifying dementia cases.
In contrast, the new study is based on a more diverse cohort and used more rigorous methodology, Coresh said. “It’s capturing the latest decade of risk, from age 85 to 95 years,” which is important because people are living longer, he added. The report analyzed data collected from 1987 to 2020 from 15,043 participants in the Atherosclerosis Risk in Communities (ARIC) study. Drawn from four US communities, these individuals were all free of dementia at age 55. One of the study sites was Jackson, Mississippi, where Black individuals comprise 82% of the population. Black individuals accounted for 26.9% of the total study population, 55.1% of participants were women, and 30.8% carried at least one copy of the apolipoprotein E (APOE) epsilon 4 allele (28.1% with one copy and 2.7% with two copies). Over the past three decades, study participants underwent clinical examinations, including cognitive testing, laboratory testing, and both in-person and telephone interviews. In addition to interviews, the ARIC study uses review of hospital records and death certificates to identify dementia, with cases adjudicated by a committee assisted by a computer algorithm. Over a median follow-up of 23 years, 3252 incident cases of dementia were identified. At age 55, the researchers estimated the lifetime risk for dementia (up to age 95) to be 42% (95% CI, 41-43). The cumulative incidence of dementia remained relatively low between 55 and 75 years of age (3.9%) but rose sharply beyond that. The lifetime risk for dementia was higher in women (48%; 95% CI, 46-50) vs men (35%; 95% CI, 33-36) and in Black individuals (44%; 95% CI, 41-46) vs White individuals (41%; 95% CI, 40-43). Additionally, Black individuals experienced an earlier median age of dementia onset (79 years) compared with White individuals (82 years).
“By age 75, the risk is 3% in our White participants and 7% in our Black participants, so the racial difference is expressed early, and it stays to age 85 and then sort of closes and becomes smaller by age 95,” said Coresh.

Structural Racism

The racial disparity in dementia risk may reflect the impact of structural racism and socioeconomic inequality, including limited access to quality education and nutrition, said Coresh. It could also be influenced by poorer access to healthcare and a higher prevalence of vascular risk factors, such as hypertension and diabetes, he added. “People who were born in the South a long time ago may not have had the same opportunities as people in other areas and of other races and ethnicities.” The substantially higher risk among women is thought to reflect women’s longer life expectancy. Adults with two copies of the APOE epsilon 4 allele had a higher lifetime risk for dementia (59%; 95% CI, 53-65) than those with one copy (48%; 95% CI, 45-50) or no copies (39%; 95% CI, 37-40). Dementia also occurred earlier in APOE epsilon 4 carriers (median age, 79 years with two copies, 81 years with one copy, and 82 years with no copies). Applying the lifetime risk estimates to US Census population projections, the researchers predict that the annual number of incident dementia cases will increase from about 514,000 in 2020 to 1 million in 2060. The number of individuals who develop dementia each year is expected to nearly double among White individuals and triple among their Black counterparts. These findings highlight the importance of addressing the growing number of Baby Boomers transitioning into the oldest age group, Coresh noted. As more individuals live into their 80s and beyond, the medical community must focus on managing early symptoms and implementing effective prevention strategies, he added.
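The projection approach described above, combining age-specific incidence with population forecasts, can be sketched in a few lines. Note that all rates and population figures below are hypothetical placeholders for illustration, not the ARIC study's actual estimates:

```python
# Illustrative sketch: projecting annual incident dementia cases by
# applying age-specific incidence rates to projected population counts.
# All numbers below are hypothetical, not the study's actual estimates.

# Hypothetical annual dementia incidence per person, by age band
incidence_per_person = {
    "55-64": 0.0005,
    "65-74": 0.004,
    "75-84": 0.02,
    "85+": 0.06,
}

# Hypothetical projected US population by age band
population_2060 = {
    "55-64": 45_000_000,
    "65-74": 40_000_000,
    "75-84": 30_000_000,
    "85+": 15_000_000,
}

def projected_incident_cases(population: dict, rates: dict) -> float:
    """Sum expected new cases across age bands: cases = N * rate."""
    return sum(population[band] * rates[band] for band in population)

cases = projected_incident_cases(population_2060, incidence_per_person)
print(f"Projected annual incident cases: {cases:,.0f}")
```

Because incidence rises steeply with age, growth in the oldest bands dominates the total, which is why population aging alone can roughly double annual case counts even with unchanged age-specific risk.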
Such strategies should include managing risk factors such as social isolation, disordered sleep, mood issues, poor cardiovascular health, and uncorrected hearing loss, he said. As it stands, research suggests only about 20% of US adults meet recommended lifestyle and cardiovascular health targets, and only 30% of older adults with hearing loss use a hearing aid. Dementia before age 75 may be underestimated in the study because, up until that age, dementia was ascertained retrospectively through phone interviews and review of hospital and death records. Other limitations include the lack of external validation of the results, limited generalizability of the projections to the entire US population, and exclusion of racial groups other than White and Black from the analysis.

A Crisis in the Making

Commenting for Medscape Medical News, Maria C. Carrillo, PhD, chief science officer and medical affairs lead for the Alzheimer’s Association, said the new findings reinforce what the Alzheimer’s Association has long emphasized: that the risk and prevalence of Alzheimer’s disease and other dementias are expected to rise significantly in the coming years. “There’s an urgent need to address the crisis of Alzheimer’s disease and dementia in this country and globally,” she said. These new data confirm the high risk for dementia among minority groups, as well as among women, who have often been underrepresented in research, said Carrillo. The good news, though, is that this is “an exciting and hopeful time” in the field, with Alzheimer’s treatments being approved that slow the progression of early disease and more treatments in the pipeline. “There are also better ways available now to detect and diagnose Alzheimer’s, and we’re learning more every day about risk reduction for Alzheimer’s and all other dementias,” Carrillo said. Carrillo added that the Alzheimer’s Association is leading the U.S.
POINTER Study, a 2-year clinical trial evaluating whether lifestyle interventions targeting risk factors can protect cognitive function in older adults at increased risk for cognitive decline. Results of that study will be reported later this year at the Alzheimer’s Association International Conference. She also pointed to last summer’s Lancet Commission report showing that 40% of global dementia is potentially avoidable through behavioral and lifestyle changes. “The main message is that there’s hope that we can, in fact, change the trajectory,” and that by making such changes, “we won’t hit the numbers that this new paper talks about until 2060.” However, she noted there is still more work to be done. For example, she said, there is an urgent need for more research into how to detect dementia as early as possible. Carrillo emphasized the importance of checking patients’ blood pressure and cholesterol and conducting other tests to determine risk, and of discussing lifestyle changes that could reduce dementia risk. Note: This article originally appeared on Medscape.
