The sequential, multiple assignment, randomized trial (SMART) design

Shengping Yang PhD, Gilbert Berdine MD

Corresponding author: Shengping Yang
Contact Information: Shengping.Yang@pbrc.edu
DOI: 10.12746/swrccc.v12i50.1281

I am planning a randomized trial to assess two interventions for preventing smoking relapse: counseling sessions and nicotine replacement therapy. I am considering whether a Sequential, Multiple Assignment, Randomized Trial (SMART) design is appropriate.

A Sequential, Multiple Assignment, Randomized Trial (SMART) design is a dynamic and adaptive approach to clinical trials, aiming to optimize intervention/treatment strategies for complex health conditions. It involves multiple stages of randomization, allowing tailored adjustments of interventions based on individual responses over time. Unlike fixed intervention plans, SMART trials naturally accommodate diverse patient responses by exploring multiple intervention regimens, reflecting the variability observed in real-world scenarios.

SMART designs are particularly well-suited for chronic, heterogeneous conditions/diseases with the potential for recurrence. In cases where a widely effective intervention is not available for all individuals, and different subjects exhibit varied responses to the same treatment, SMART designs become a desirable choice. This is because a SMART design takes advantage of the availability of multiple intervention options, makes adaptive decisions, and has the potential to evaluate the effect of intervention sequence on overall outcomes. In general, SMART trials are often designed to identify the most effective intervention plan tailored for an individual, providing an attractive alternative to the one-size-fits-all approach. By embracing the complexity of diverse patient responses, SMART designs provide the possibility for precision medicine, offering personalized strategies that align more effectively with the unique needs of individuals.1

1. BACKGROUND

While the specific term “SMART” might not have been coined until the late 20th century, the underlying principles trace back to the broader field of adaptive clinical trial designs. Early adaptive designs often focused on modifying trial parameters based on results from interim analyses, and SMART designs took it a step further by addressing heterogeneity in intervention responses and incorporating sequential randomizations to dynamically tailor interventions. The concept gained prominence as researchers sought more efficient and personalized approaches to intervention evaluations. Since then, SMART designs have evolved and gained recognition as a powerful tool for optimizing intervention sequences, identifying the most effective strategies, and tailoring interventions to individual patient characteristics.2

2. COMPONENTS OF A SMART DESIGN

There are generally four components in a SMART design: intervention options, decision stages, tailoring variables, and decision rules.

2.1. INTERVENTION/TREATMENT OPTIONS

These include all available intervention types, modules, doses, and delivery options.

2.2. DECISION STAGES

These are specific time points or phases within the trial where intervention decisions are made based on participant responses or other predetermined criteria.

2.3. TAILORING VARIABLES

A tailoring variable is a variable or set of variables used to guide the adaptation of intervention strategies at various decision stages. The choice of the tailoring variable is crucial because it serves as a key determinant in deciding which intervention is most suitable for an individual participant based on their characteristics, responses, or other relevant factors.

There are baseline and intermediate tailoring variables. The former comprises information obtained for making the first decision, such as participant demographic variables and baseline health conditions; the latter comprises information obtained at later time points, such as biomarkers associated with the interventions, and is used for making decisions at subsequent stages. In general, tailoring variables are expected to have predictive value in terms of how individuals are likely to respond to different interventions. They should also be obtainable in a timely and accurate manner, facilitating the development of responsive and effective intervention strategies.

2.4. DECISION RULES

Decision rules are predefined criteria that determine which intervention option or strategy a participant receives at each stage of the trial. For example, in many SMART trials, a decision rule states that if a participant responds at a certain stage, the next-stage intervention will be the same as the current one; otherwise, the participant is randomly assigned to a new intervention. Such rules determine the most appropriate intervention for each participant at each decision stage. Decision rules are an appealing feature of SMART trials because they mimic the decision-making process in real-life clinical practice.
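A rule of this form can be sketched in a few lines of code; the function and intervention names below are hypothetical, and 1:1 re-randomization of non-responders is assumed:

```python
import random

def next_stage_intervention(responded, current, alternatives, rng=random):
    """Hypothetical SMART decision rule: responders continue their
    current intervention; non-responders are re-randomized (1:1)
    among the alternative options."""
    if responded:
        return current
    return rng.choice(alternatives)

# A responder to intervention "A" stays on "A"; a non-responder
# is re-randomized between "A+B" and "C".
print(next_stage_intervention(True, "A", ["A+B", "C"]))
print(next_stage_intervention(False, "A", ["A+B", "C"]))
```

In a real trial the `responded` flag would come from the tailoring variable measured at the decision stage, and the randomization would use a prespecified allocation scheme rather than a simple coin flip.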

In SMART designs, decision rules and tailoring variables have crucial roles in adapting interventions based on individual responses over time.3–5

3. AN EXAMPLE SMART DIAGRAM

Figure 1 illustrates a two-stage SMART design. In the first stage, all participants are randomized to either intervention A or B. For those allocated to intervention A, a positive response (determined by the tailoring variable and decision rule, such as a fasting glucose lower than 100 mg/dL at the end of the stage-one intervention) leads to their continuation with intervention A in stage two. Conversely, if participants do not respond to intervention A, a second randomization ensues, assigning them to either intervention A+B or intervention C. A similar process applies to participants initially randomized to intervention B in stage one. Notably, stage-one randomization can occur either at participant enrollment or after the completion of a specific intervention for all participants (not depicted in Figure 1). Following the stage-one intervention, participants are categorized into responders and non-responders based on the tailoring variables. Responders typically continue their current intervention in stage two, mirroring real-world practice. Non-responders undergo randomization to other interventions, such as the addition of intervention components or entirely new interventions. Thus, this design has the potential to yield a superior average outcome compared with classic designs because the interventions at each stage can be adapted and tailored to enhance participant outcomes.7 In the study you are planning, the two interventions are counseling sessions and nicotine replacement therapy, and adaptive decisions can be made at the end of the first stage based on each participant's response.


Figure 1. An example of a two-stage SMART design. R represents the time of randomization. Letters A to F are used to distinguish participants who received different intervention regimens. Numbers I to VI represent participants assigned to different groups.
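The participant flow just described can be sketched as a small simulation. The response probability and the mapping of groups to regimens (e.g., group II = A+B, group III = C) are illustrative assumptions, not values from the article:

```python
import random

def simulate_smart(n=600, p_response=0.4, seed=1):
    """Simulate participant flow through a two-stage SMART design
    like Figure 1. Probabilities and the group-to-regimen mapping
    are illustrative assumptions."""
    rng = random.Random(seed)
    groups = {g: 0 for g in ("I", "II", "III", "IV", "V", "VI")}
    for _ in range(n):
        first = rng.choice(["A", "B"])           # stage-one randomization
        responded = rng.random() < p_response    # tailoring variable + decision rule
        if responded:                            # responders continue stage-one intervention
            groups["I" if first == "A" else "IV"] += 1
        elif first == "A":                       # A non-responders: A+B or C
            groups["II" if rng.random() < 0.5 else "III"] += 1
        else:                                    # B non-responders: re-randomized
            groups["V" if rng.random() < 0.5 else "VI"] += 1
    return groups

counts = simulate_smart()
print(counts, sum(counts.values()))
```

Simulations of this kind are often used at the planning stage to check expected group sizes, since the second-stage groups depend on the assumed non-response rate.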

One of the unique features of SMART designs is the Embedded Adaptive Intervention (Embedded AI). Specifically, an Embedded AI is a treatment strategy integrated into the study design and is an essential component of SMART trials. For example, the design above contains four Embedded AIs, each consisting of a first-stage intervention together with a second-stage option for responders and non-responders; they are enumerated in Section 6.2.

The term ‘embedded’ emphasizes that the adaptive intervention is an integral part of the study, incorporated into the trial design to provide personalized and adaptive approaches.

4. QUESTIONS THAT CAN BE ANSWERED BY A SMART DESIGN

SMART designs are formulated to address research questions that classic randomized clinical trials often cannot answer because of their multi-stage, multi-assignment, and adaptive nature. For example, a SMART trial can address questions such as: Which intervention is most effective as the first-line option? For participants who do not respond to the initial intervention, which second-stage intervention works best? Which embedded adaptive intervention produces the best average outcome? And do baseline or intermediate tailoring variables moderate intervention effects?

5. ADVANTAGES OF SMART DESIGNS

SMART designs offer several advantages over classic randomized designs. They allow multiple intervention sequences to be evaluated within a single trial; they mirror clinical practice by adapting interventions to individual responses; they permit estimation of both stage-specific main effects and embedded adaptive interventions; and, on average, participants may experience better outcomes because interventions are tailored over time.

6. DATA ANALYSIS, SAMPLE SIZE, AND POWER CALCULATIONS

While the selection of data analysis methods is ultimately guided by scientific considerations specific to the study area, the following analyses are commonly employed:

6.1. EVALUATION OF THE MAIN EFFECTS

In the provided example, comparisons can be conducted to assess both the first- and second-stage main effects. For the first stage, the focus is to determine the most effective first-line intervention, that is, whether there is any difference between intervention A and intervention B as the first-line approach. This question can be addressed by comparing groups I+II+III with groups IV+V+VI on the study outcome. For the second stage, the focus is to identify the most effective intervention for those who did not respond to the first-stage intervention; in other words, is there a difference between intervention A+B and intervention C for the non-responders? This question can be addressed by comparing the groups assigned A+B with the groups assigned C among non-responders.
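As a sketch, the first-stage main effect comparison amounts to a two-sample test on the pooled groups; the outcome values below are made up for illustration:

```python
from statistics import mean, stdev

def two_sample_t(x, y):
    """Pooled two-sample t statistic for comparing mean outcomes
    between two pooled SMART groups."""
    nx, ny = len(x), len(y)
    # pooled sample variance (stdev from the statistics module is the
    # sample standard deviation, so squaring gives the sample variance)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

# First-stage main effect: pooled outcomes for participants who
# started on A (groups I+II+III) versus B (groups IV+V+VI).
a_first = [5.1, 4.8, 6.0, 5.5, 4.9, 5.2]
b_first = [4.2, 4.5, 3.9, 4.8, 4.1, 4.4]
print(round(two_sample_t(a_first, b_first), 2))
```

The second-stage main effect uses the same machinery, applied only to the non-responders pooled by their second-stage assignment.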

The sample size calculation for the main effect mirrors that of the two-sample t-test or ANOVA. However, for the first-stage main effect, all participants contribute to the power calculation, while for the second-stage main effect, only non-responders are considered in the power calculation. It is crucial to note that obtaining an accurate sample size/power calculation for the second-stage main effect requires a robust estimate of the percentage of first-stage non-responders, where tailoring variables and decision rules play crucial roles.
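The two calculations can be sketched with the usual normal-approximation formula for a two-sample comparison; the effect size and non-response rate below are illustrative assumptions:

```python
from math import ceil

def n_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for a two-sample comparison via the normal
    approximation, n = 2 * ((z_alpha + z_beta) / d)^2, here with
    alpha = 0.05 (two-sided) and power = 0.80."""
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# First-stage main effect: all participants contribute.
stage1 = n_per_arm(0.5)                 # per first-stage arm

# Second-stage main effect: only non-responders are re-randomized,
# so total enrollment must be inflated by the assumed non-response rate.
nonresponse_rate = 0.6                  # illustrative assumption
stage2_total = ceil(2 * n_per_arm(0.5) / nonresponse_rate)
print(stage1, stage2_total)
```

This makes the dependence on the non-responder estimate concrete: halving the assumed non-response rate doubles the total enrollment needed to power the second-stage comparison.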

6.2. EVALUATION OF THE EMBEDDED AIS

In the example above, there are four Embedded AIs, corresponding to groups I+II (Embedded AI #1), groups I+III (Embedded AI #2), groups IV+V (Embedded AI #3), and groups IV+VI (Embedded AI #4). Comparisons can be made, for example, between Embedded AIs #1 and #3, #1 and #4, etc. It is worth noting that a naive comparison between two Embedded AIs can be biased; this bias arises because each Embedded AI includes all the responders but only a proportion of the non-responders, so the unadjusted estimate of the average outcome over-represents responders relative to the target population. In addition, the construction of the Embedded AIs can result in some participants being consistent with more than one Embedded AI; for example, subjects in group I are included in both Embedded AIs #1 and #2. To adjust for this, comparisons can be performed as described in Nahum-Shani et al.6,7 Sample size calculation methods have been developed for evaluating Embedded AIs,8 but they are not the focus of this article.
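As a sketch of the adjustment idea, a common weighting scheme under 1:1 randomization at both stages gives responders a weight of 2 (they were randomized once) and non-responders a weight of 4 (they were randomized twice); the outcome values below are made up for illustration:

```python
def weighted_mean(outcomes, responder_flags):
    """Inverse-probability-weighted mean outcome for one Embedded AI
    under 1:1 randomization at both stages: responders were randomized
    once (weight 1/0.5 = 2), non-responders twice (weight 1/0.25 = 4)."""
    weights = [2 if r else 4 for r in responder_flags]
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

# Illustrative data for one Embedded AI: three responders (e.g., group I)
# and two non-responders (e.g., group II).
outcomes = [5.0, 5.4, 4.6, 3.8, 4.0]
responders = [True, True, True, False, False]
print(round(weighted_mean(outcomes, responders), 2))
```

Upweighting non-responders compensates for the fact that only half of them are consistent with any given Embedded AI, which removes the responder-heavy bias of the naive average.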

There are also Q-learning methods developed for assessing the relative quality of intervention options.9 They are also beyond the scope of this article.

7. OTHER CONSIDERATIONS

There are various types of SMART designs, and with careful consideration of the study goals, a SMART design can be deeply tailored to answer specific research questions. Multiple stages are inherent in a SMART design, and the sequence of interventions is not only related to the effectiveness of combined interventions but also to the prioritization of interventions based on research interest. For instance, in the early stages, the number of participants per group is often larger, making it preferable to arrange comparisons that are of greater interest for a study.

8. DISADVANTAGES OF ADAPTIVE DESIGNS

There are always two sides to a coin. Adaptive designs improve efficiency by pruning unproductive strategies; however, effects that do not become visible until long after the decision time frame will be missed. In chess analysis engines, moves that immediately appear to be bad because of an immediate loss of material may be pruned from further analysis; sacrifices will be missed unless the number of moves analyzed before pruning is long enough to capture the benefit of the sacrifice. A medical analogue would be immediate postoperative death following surgical intervention for early-stage cancer, which makes surgery appear to be a poor choice. When the time frame of analysis is longer, however, attrition due to progression of the cancer may become worse than the short-term mortality following surgery.

Division of trial groups based on adaptive decisions reduces the statistical power of the subsequent treatment branches, so a larger number of patients may need to be recruited. Multicenter trials may be difficult to conduct, as disagreements may arise between trial centers, between regulatory bodies at individual centers, such as Institutional Review Boards, or with government agencies, such as the Food and Drug Administration, that have oversight over all trials. Bias can be introduced by adaptive randomization, with reinforcement or magnification of Type I statistical error. It is also possible for adaptive randomization to make errors that are not apparent until later, leading to larger numbers of patients randomized to poor outcomes than would have occurred with straight randomization. A discussion of the disadvantages of different subtypes of adaptive randomization designs is available elsewhere.10

In summary, the specific framework of SMART emerged as a response to the complexities of chronic and relapsing health conditions, acknowledging the need for adaptive strategies that account for the dynamic nature of individual responses to interventions. SMART designs are developed not only to assess the impact of specific interventions but also to evaluate the effects of sequences and combinations of different interventions. When compared to classic randomized designs, SMART designs are particularly beneficial for scenarios where the intervention under evaluation may exhibit delayed or sequential effects, commonly encountered in real-world situations. However, the complexity of study design and the non-randomness of certain intervention assignments can make data analysis for SMART designs more challenging. Moreover, SMART designs may necessitate a larger number of participants due to the increased number of comparisons. Nevertheless, SMART designs hold appeal for study participants, as they often yield, on average, better outcomes attributable to the implementation of personalized intervention strategies. These advantages collectively contribute to the growing popularity of SMART designs in clinical research, especially in fields where individualized and adaptive treatment strategies are paramount for enhancing patient outcomes.

Keywords: sequential, multiple assignment; randomized trial; SMART; design


REFERENCES

  1. United States Department of Education. An introduction to adaptive interventions and SMART designs in education. https://ies.ed.gov/ncser/pubs/2020001/pdf/2020001.pdf Accessed 1/3/2024.
  2. Almirall D, Nahum-Shani I, Sherwood NE, et al. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med 2014 Sep;4(3):260–74. doi: 10.1007/s13142-014-0265-0.
  3. Kosorok M, Chen J, Chaudhari M, et al. Design and sample size calculation for SMART studies. 2016. doi:10.13140/RG.2.2.13901.90085.
  4. Pfammatter AF, Nahum-Shani I, DeZelar M, et al. SMART: Study protocol for a sequential multiple assignment randomized controlled trial to optimize weight loss management. Contemp Clin Trials 2019 Jul; 82:36–45. doi: 10.1016/j.cct.2019.05.007.
  5. Kopelowicz A, Nandy K, Ruiz ME, et al. Improving self-management of Type 2 diabetes in Latinx patients: Protocol for a sequential multiple assignment randomized trial involving community health workers, registered nurses, and family members. JMIR Res Protoc 2023 Jan 16;12:e44793. doi: 10.2196/44793.
  6. The Methodology Center. Primary data analysis method for comparing adaptive interventions. https://scholarsphere.psu.edu/resources/70c36eac-33dd-4f88-aa3b-9a2f3e78a42a. Accessed 1/3/2024.
  7. Nahum-Shani I, Qian M, Almirall D, et al. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods 2012 Dec;17(4):457–477. doi: 10.1037/a0029372.
  8. Artman WJ, Nahum-Shani I, Wu T, et al. Power analysis in a SMART design: sample size estimation for determining the best embedded dynamic treatment regime. Biostatistics 2020 Jul 1;21(3):432–448. doi: 10.1093/biostatistics/kxy064.
  9. Nahum-Shani I, Qian M, Almirall D, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods 2012 Dec;17(4):478–494. doi: 10.1037/a0029373.
  10. Korn EL, Freidlin B. Adaptive clinical trials: Advantages and disadvantages of various adaptive design elements. J Natl Cancer Inst 2017 Jun 1;109(6): djx013. doi: 10.1093/jnci/djx013.


Article citation: Yang S, Berdine G. The sequential, multiple assignment, randomized trial (SMART) design. The Southwest Respiratory and Critical Care Chronicles 2024;12(50):58–63
From: Department of Biostatistics (SY), Pennington Biomedical Research Center, Baton Rouge, LA; Department of Internal Medicine (GB), Texas Tech University Health Sciences Center, Lubbock, Texas
Submitted: 1/9/2024
Accepted: 1/17/2024
Conflicts of interest: none
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.