REVIEW ARTICLE
Year : 2014  |  Volume : 1  |  Issue : 1  |  Page : 10-12

Importance of sample size in clinical trials


Ganesh S Kumar
Department of Preventive and Social Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India

Date of Submission: 07-Oct-2013
Date of Decision: 19-Dec-2013
Date of Acceptance: 06-Jan-2014
Date of Web Publication: 01-Apr-2014

Correspondence Address:
Ganesh S Kumar
Department of Preventive and Social Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry 605 006
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2348-8093.129721

  Abstract 

Adequate sample size is an important issue in clinical trials. This article assesses the effect of various factors on sample size and the importance of an adequate sample size in clinical trials. Recent literature pertinent to this objective was searched in PubMed and other sources and analysed. The factors that determine an adequate sample size are of paramount importance for obtaining accurate results, whereas a sample that is smaller or larger than required carries many disadvantages. A researcher needs to address these issues while determining the sample size for a clinical trial. Careful attention to, and appropriate handling of, all the parameters related to sample size before initiating a clinical trial will improve the validity of the study.

Keywords: Clinical research, clinical trials, sample size


How to cite this article:
Kumar GS. Importance of sample size in clinical trials. Int J Clin Exp Physiol 2014;1:10-2



Introduction


Adequate sample size is an important consideration for researchers at all levels; in clinical trials, however, its application needs special consideration, and trials often fail to address many of the parameters related to sample size. Any research study should be able to detect a difference in the outcome measure between two or more groups that is as close to reality as possible. As adequate sample size is one of the factors determining the outcome of a study, there is a need to understand the effect of the various parameters on sample size and on study outcome in clinical trials. A recent article highlighted the importance of a sufficient sample size in clinical trials related to oral health. [1] Since the sample size calculation depends on several factors, critical analysis and application of each factor will increase the validity of the findings.

There are four phases of a clinical trial. Phase 1 is the initial trial in human beings, assessing the toxicity and safety of a new drug; its primary purpose is to find a dose that is tolerated without serious side effects in a small number of individuals. Phase 2 is conducted among human volunteers to examine the potential benefit and safety of the new drug. Phase 3 trials involve full-scale evaluation, including randomization of subjects, to assess the benefits and harms of the drug, while Phase 4 involves post-marketing surveillance to examine the applicability of those benefits and harms on a larger scale. Clinical trials are also classified, according to the objective, the methodology adopted and the hypothesis formulated, as therapeutic vs. prophylactic, controlled vs. uncontrolled, randomized vs. non-randomized, efficacy vs. effectiveness, and non-inferiority and equivalence trials.


Importance of Adequate Sample Size in Clinical Trials


Two types of errors, the type 1 error (α error) and the type 2 error (β error), have to be taken into consideration in sample size estimation. A type 1 error is rejection of the null hypothesis (H0) when it is actually true, i.e., finding an effect when in reality there is none. A type 2 error is acceptance of H0 when it is actually false, i.e., failing to find an effect when in reality there is one. Both errors, together with the expected difference in the study outcome and its variability, should be incorporated into the sample size calculation, since investigators cannot be certain of the validity of a decision taken from a sample. Normally, the α error, i.e., the probability of concluding that a difference exists when in reality there is none, is set at the 0.05, 0.01 or 0.001 level, and the power of the study (1 − β), i.e., the ability of the study to detect a difference when in reality there is one, is set at 80-95%.

A greater sample size is required if the researcher wants to detect a smaller difference, adopts a smaller α or a smaller β (i.e., more power), or expects poorer adherence to the intervention. Lower adherence reduces the power of the study and can lead to misinterpretation of the results. A correction factor for non-adherence is to multiply the calculated sample size by 1/(1 - R0 - R1)^2, where R0 is the drop-out rate and R1 is the drop-in rate. [2] Thus, a combined non-adherence rate of 10% increases the required sample size by nearly a quarter. [2] The estimated duration of the trial also affects the sample size; for example, if cholesterol-lowering drugs act at least partly by affecting arterial plaque, the time needed for that process to occur implies a larger sample size. [2] Similarly, equivalence and non-inferiority studies may require larger sample sizes; in such studies, two treatments whose difference is smaller than a pre-specified margin are considered equal, or the difference is considered unimportant. Another issue is that, even with an adequate sample size, subjects may not be available for analysis for various reasons, including exclusions made before and after randomization and losses to follow-up, which can lead to questionable results. [3] If a subject enrolled in the trial is later found to be ineligible, the decision to exclude that subject should be taken early; otherwise it may be regarded as manipulation of the data.
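
To make these quantities concrete, here is a minimal sketch (the event rates and adherence figures are illustrative, not taken from the article) of how α, power and the non-adherence correction 1/(1 - R0 - R1)^2 enter a standard two-proportion sample size calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions
    (normal-approximation formula, two-sided alpha)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

def adjust_for_nonadherence(n, drop_out=0.05, drop_in=0.05):
    """Inflate n by 1 / (1 - R0 - R1)^2, where R0 is the drop-out rate
    and R1 is the drop-in rate (correction described in reference [2])."""
    return ceil(n / (1 - drop_out - drop_in) ** 2)

n = n_per_arm(p1=0.20, p2=0.15)      # detect a 5-point absolute difference
print(n)                             # about 903 per arm (illustrative)
print(adjust_for_nonadherence(n))    # about 1115 per arm: roughly 23% more,
                                     # i.e. the "nearly a quarter" increase
```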

The clinical efficacy of a new treatment may often be better evaluated by two or more co-primary endpoints, and several methods have been proposed for calculating the sample size required to design a trial with multiple correlated co-primary endpoints. [4] In these circumstances, the sample size has to be determined at the design stage so that statistical significance can be demonstrated for all co-primary endpoints while preserving the intended power, since the type II error increases as the number of co-primary endpoints increases. [5] Evaluating the association between each rare genetic variant and treatment response one at a time would require enormous sample sizes; combining the rare variants can substantially reduce the required sample size, but this rests on assumptions about the similarity of their effects. [6] In Phase II trials, where two-stage designs are commonly used, the design can be chosen to minimize the expected sample size under a specific assumed treatment effect, but it can perform poorly if the true treatment effect differs from that assumption. [7]
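
The erosion of power with multiple co-primary endpoints can be illustrated with a deliberately simplified sketch that assumes independent endpoints (real co-primary endpoints are usually correlated, which the cited methods account for):

```python
def overall_power(per_endpoint_power, k):
    """Power to declare success on ALL k co-primary endpoints,
    assuming (unrealistically) independent test statistics."""
    return per_endpoint_power ** k

for k in (1, 2, 3, 4):
    needed = 0.80 ** (1 / k)  # per-endpoint power that keeps overall power at 0.80
    print(k, round(overall_power(0.80, k), 3), round(needed, 3))
# Overall power with 80% per endpoint: 0.8, 0.64, 0.512, 0.41 -- the type II
# error grows with each added endpoint, so per-endpoint power must rise to
# roughly 0.894, 0.928, 0.946 to hold 80% overall, driving the sample size up.
```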

Trial designs with a shorter duration of follow-up have greater within-individual variance and require larger sample sizes to detect the same treatment effect. Reducing the number of examinations within a trial of a given duration likewise requires an increased sample size to maintain the same power. Conversely, a longer trial duration and/or more frequent examinations in a trial with repeated measures of an outcome variable substantially increase study power and reduce the required sample size. If the costs of recruiting, retaining and examining individual participants are known, the sample size, study length and number of examinations can be balanced to optimize the trial design relative to costs or other study objectives. [8]
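
As a rough illustration of this trade-off (my own sketch, assuming each subject contributes the mean of m equally correlated repeat measurements with common correlation ρ; this is not a formula from the cited paper):

```python
def relative_n(m, rho):
    """Required sample size with m repeat measurements per subject, relative to
    a single measurement, when repeats share a common correlation rho
    (variance of the subject mean = sigma^2 * (1 + (m - 1) * rho) / m)."""
    return (1 + (m - 1) * rho) / m

for m in (1, 2, 4, 8):
    print(m, round(relative_n(m, rho=0.5), 2))
# 1: 1.0   2: 0.75   4: 0.62   8: 0.56 -- more examinations shrink the
# within-individual noise (with diminishing returns), while fewer examinations
# push the required sample size back up.
```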

Sample size in subgroup analyses is another issue that needs attention, as such analyses can produce spurious results. A recent article stated that the increase in sample size needed to identify differential subgroup effects may be substantial, and that the commonly used rule of four may not always be sufficient. According to the rule of four: (a) subgroups should be restricted to those proposed before data collection, and any subgroups chosen after this time should be clearly identified; (b) trials should ideally be powered with subgroup analyses in mind, although subgroup-specific analyses are particularly unreliable and are affected by many factors; (c) subgroup analyses should always be based on formal tests of interaction, and even these should be interpreted with caution; and (d) the results of any subgroup analyses should not be over-interpreted and, unless there is strong supporting evidence, are best viewed as a hypothesis-generating exercise. It has also been suggested that confidence intervals, rather than P values, be considered when interpreting such results, as in observational studies and meta-analyses. [2],[9]
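
One standard way to see why subgroup (interaction) analyses demand much larger samples, offered here as my own illustration of the point rather than as the cited rule, is that with two equal-sized subgroups the interaction estimate has four times the variance of the overall treatment effect:

```python
def var_overall(sigma2, n_per_arm):
    """Variance of the overall difference in means between two arms."""
    return 2 * sigma2 / n_per_arm

def var_interaction(sigma2, n_per_arm):
    """Variance of the difference between the two subgroup-specific treatment
    effects, when each arm splits into two equal subgroups of n_per_arm / 2."""
    per_subgroup_effect_var = 2 * sigma2 / (n_per_arm / 2)
    return 2 * per_subgroup_effect_var

sigma2, n = 1.0, 200
print(var_interaction(sigma2, n) / var_overall(sigma2, n))  # 4.0
# Detecting an interaction of the same magnitude as the main effect with the
# same power therefore needs roughly four times the sample size.
```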

Smaller sample size issues

The disadvantages of a sample that is smaller or larger than required are many. A clinical trial with a small sample size may be unable to detect a true difference in the outcome, or may lack the required power, leading to invalid results and wrong conclusions. With a small sample size, the confidence interval becomes wide, with the relative risk or odds ratio falling on either side of one, so the trial concludes that there is no difference between the interventions. As a result, a genuinely useful intervention may never be adopted in the clinical setting, for example a cheaper intervention that produces a 1% or 2% reduction in deaths or morbidity; the loss is especially large for preventive strategies. For instance, in a study assessing the effect of reduced salt intake on blood pressure in high-risk groups, even a modest but real reduction in blood pressure would translate into improved morbidity and mortality in the community. It has been shown that, because of small sample sizes, many experimental clinical trials have failed to demonstrate a statistically significant benefit of the therapy being evaluated. [10] Investigators should therefore be cautious about drawing premature conclusions from such data and analyses.
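
A small numerical illustration (the event counts are hypothetical) of how a small trial's confidence interval for the relative risk can straddle one even when the point estimate suggests benefit, whereas a larger trial with the same underlying rates can exclude it:

```python
from math import exp, log, sqrt

def rr_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk and its approximate 95% CI via the usual log-RR standard error."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Same 20% relative reduction in event rate, small versus large trial:
print(rr_ci(8, 100, 10, 100))       # RR 0.80, CI roughly 0.33-1.94 -> crosses 1
print(rr_ci(160, 2000, 200, 2000))  # RR 0.80, CI roughly 0.66-0.98 -> excludes 1
```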

Larger sample size issues

On the other hand, a larger than required sample size results in a waste of resources. It can also reduce validity and accuracy because of the difficulty of maintaining data quality and because of higher non-response rates. [11] Ethics is another determining factor in sample size estimation for clinical trials, since a larger sample size exposes more subjects to harm or to an inferior treatment in one or more groups. To tackle this problem, a sequential analysis approach can be adopted, in which subjects are brought into the experiment over a relatively long period rather than all at once; as soon as a conclusion is reached with adequate power during the course of the study, the experiment can be stopped. However, more extreme P values are required for stopping an intervention early. [10]
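
The need for more extreme P values at interim analyses can be seen in a simple Monte Carlo sketch (my own illustration, assuming normally distributed outcomes and no true treatment effect): repeatedly testing at a nominal two-sided 0.05 level inflates the overall false-positive rate.

```python
import random
from statistics import NormalDist

def false_positive_rate(looks=5, n_per_look=50, sims=2000, alpha=0.05, seed=1):
    """Simulate a two-arm trial with NO true effect, analysed after each batch
    of subjects; count how often any interim two-sided z-test reaches the
    nominal alpha level."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        batch_diffs = []
        for _ in range(looks):
            # difference in means of two arms of n_per_look subjects, sigma = 1
            batch_diffs.append(rng.gauss(0.0, (2 / n_per_look) ** 0.5))
            n_total = n_per_look * len(batch_diffs)
            cum_diff = sum(batch_diffs) / len(batch_diffs)
            z = cum_diff / (2 / n_total) ** 0.5
            if abs(z) > z_crit:
                hits += 1
                break
    return hits / sims

print(false_positive_rate())  # roughly 0.14 here, well above the nominal 0.05,
                              # which is why stricter interim thresholds are used
```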

Early cessation of trials

Clinical trials can be stopped early in various circumstances, including overwhelming benefit, clear harm or futility, as well as for more complicated reasons. Randomized controlled trials (RCTs) stopped early for benefit often show implausibly large treatment effects, independent of the presence of statistical stopping rules, particularly when the number of events is small. [12],[13] The practice of stopping RCTs early is problematic, especially when a trial is stopped for apparent benefit; problems include inappropriate interpretation of the results and ethical issues concerning trial participants, clinicians and society as a whole. [14] In the case of a harmful effect, however, it would be inappropriate to continue a trial until the intervention is proven harmful at the usual P value of 0.05. [2]
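
A companion sketch (again my own illustration, with hypothetical parameters) shows the related estimation problem: among simulated trials stopped at the first interim analysis crossing a conventional boundary, the average estimated effect exceeds the true effect, especially when the amount of data at stopping is small.

```python
import random

def average_stopped_estimate(true_effect=0.2, looks=4, n_per_look=25,
                             sims=5000, z_stop=1.96, seed=2):
    """Among simulated trials stopped at the first interim analysis whose
    z-statistic exceeds z_stop, report the mean estimated treatment effect
    (difference in means, sigma = 1 per arm, true difference = true_effect)."""
    rng = random.Random(seed)
    stopped_estimates = []
    for _ in range(sims):
        cum_sum, n = 0.0, 0
        for _ in range(looks):
            batch_mean_diff = rng.gauss(true_effect, (2 / n_per_look) ** 0.5)
            cum_sum += batch_mean_diff * n_per_look
            n += n_per_look
            estimate = cum_sum / n
            if estimate / (2 / n) ** 0.5 > z_stop:
                stopped_estimates.append(estimate)
                break
    return sum(stopped_estimates) / len(stopped_estimates)

print(average_stopped_estimate())  # noticeably larger than the true effect of 0.2:
                                   # trials stopped early for benefit overestimate it
```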


Application of Sample Size in Clinical Research


Application of an adequate sample size in clinical research improves the validity of the results, and the different analytical aspects must be taken into consideration while calculating the sample size. Sample size calculation for logistic regression can involve complicated formulae, but simple methods exist whose formulae are well known and do not require specialized software: the sample size formulae for comparing means or for comparing proportions can be used to calculate the required sample size for a simple logistic regression model, with the appropriate formula depending on the nature of the covariates in the study. [15] The required sample size for a multiple logistic regression model can then be obtained by adjusting with a variance inflation factor, and the sample size for linear regression models can be calculated similarly. [15] Choosing among the various approaches according to the analytical design requires an in-depth understanding of the concept and objective of the study and of the factors influencing it. Therefore, the various factors determining sample size should be considered by the researcher before initiating a clinical trial, in order to improve the validity and applicability of the results.
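
In the spirit of the simple approach cited above (Hsieh et al.), the following sketch computes the sample size from the two-proportion formula and then inflates it by a variance inflation factor 1/(1 - ρ²), where ρ² is the squared multiple correlation of the covariate of interest with the other covariates; the numbers are illustrative, and the published formulae differ in some details (for example, they account for the prevalence of the covariate).

```python
from math import ceil
from statistics import NormalDist

def n_simple(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for the underlying two-proportion comparison
    (normal approximation), as used for a simple logistic regression model."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

def n_multiple(p1, p2, rho_sq, **kw):
    """Inflate the simple-model n by the variance inflation factor
    1 / (1 - rho_sq) for a multiple logistic regression model."""
    return ceil(n_simple(p1, p2, **kw) / (1 - rho_sq))

print(ceil(n_simple(0.30, 0.45)))          # about 160 per group (illustrative)
print(n_multiple(0.30, 0.45, rho_sq=0.3))  # about 228 per group, ~43% more once
                                           # the covariate overlaps with others
```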


Conclusion


Determination of an adequate sample size and its application in clinical trials are of paramount importance for the research community. This requires an understanding of the various factors influencing sample size and study outcome, and the adoption of appropriate, standardized methods. Attending to all of these parameters before initiating a clinical trial will increase the validity of the study.

 
References

1. Mickenautsch S. Research gaps identified during systematic reviews of clinical trials: Glass-ionomer cements. BMC Oral Health 2012;12:18.
2. Friedman LM, Schron EB. Methodology of intervention trials in individuals. In: Oxford Textbook of Public Health. 5th ed. Vol. 2. Oxford: Oxford University Press; 2009. p. 533.
3. Schulz KF, Grimes DA. Sample size slippages in randomised trials: Exclusions and the lost and wayward. Lancet 2002;359:781-5.
4. Sugimoto T, Sozu T, Hamasaki T. A convenient formula for sample size calculations in clinical trials with multiple co-primary continuous endpoints. Pharm Stat 2012;11:118-28.
5. Sozu T, Sugimoto T, Hamasaki T. Sample size determination in superiority clinical trials with multiple co-primary correlated endpoints. J Biopharm Stat 2011;21:650-68.
6. Witte JS. Rare genetic variants and treatment response: Sample size and analysis issues. Stat Med 2012;31:3041-50.
7. Wason JM, Mander AP. Minimizing the maximum expected sample size in two-stage Phase II clinical trials with continuous outcomes. J Biopharm Stat 2012;22:836-52.
8. Peters SA, Palmer MK, den Ruijter HM, Grobbee DE, Crouse JR, O'Leary DH, et al. Sample size requirements in trials using repeated measurements and the impact of trial design. Curr Med Res Opin 2012;28:681-8.
9. Brookes ST, Whitley E, Peters TJ, Mulheran PA, Egger M, Davey Smith G. Subgroup analyses in randomised controlled trials: Quantifying the risks of false-positives and false-negatives. Health Technol Assess 2001;5:1-56.
10. Friedman GD. Making sense out of statistical association. In: Primer of Epidemiology. 5th ed. McGraw Hill Education (Asia); 2004. p. 218.
11. Sundaram KR. Estimation of sample size. In: Medical Statistics, Principles and Methods. New Delhi: BI Publications; 2010. p. 245-6.
12. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M, et al. Randomized trials stopped early for benefit: A systematic review. JAMA 2005;294:2203-9.
13. Bassler D, Briel M, Montori VM, Lane M, Glasziou P, Zhou Q, et al. Stopping randomized trials early for benefit and estimation of treatment effects: Systematic review and meta-regression analysis. JAMA 2010;303:1180-7.
14. Briel M, Bassler D, Wang AT, Guyatt GH, Montori VM. The dangers of stopping a trial too early. J Bone Joint Surg Am 2012;94:56-60.
15. Hsieh FY, Bloch DA, Larsen MD. A simple method of sample size calculation for linear and logistic regression. Stat Med 1998;17:1623-34.


