Refocus this section to specifically address the following:
“Check the details about the randomization procedure used for allocation of the participants to study groups. Was a true chance (random) procedure used? For example, was a list of random numbers used? Was a computer-generated list of random numbers used?” Then use citations from your text to discuss possible threats to internal validity if there is no random assignment. I would not include inclusion and exclusion criteria here, as they are less relevant.
The study used multisite randomization to allocate participants to three groups. The first was a control group, which received standard care without a washout intervention (Moore et al., 2009). The second group received a weekly washout with 50 ml of saline (Moore et al., 2009). The third intervention group received a weekly catheter washout using 50 ml of sterile Contisol. The study participants were sampled purposively, and those who met the inclusion criteria were interviewed and recruited. According to Moore et al. (2009), those who met the inclusion criteria were randomly assigned to their treatment groups using a computer-generated list with numbers 1 to 120. The inclusion and exclusion criteria ensured the internal validity of the study (take out the part about this “ensuring” the internal validity of the study, as this assumption is too strong). Factors that might have affected the study results, such as UTI, were part of the exclusion criteria (Moore et al., 2009). The study ensured external validity through multisite randomization (Mohajan, 2017). Therefore, the results can be generalized to a broader population. The research indicates high internal consistency, since the individual participants produced similar results under the interventions given (Mohajan, 2017). Take out any statements asserting the degree of validity, reliability, etc., as there is no basis to judge this, and instead include statements such as “increases validity” or “minimizes risk.”
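For illustration only, the following is a minimal sketch of what a computer-generated allocation sequence for 120 participants across three arms could look like. It is not the authors’ actual procedure; the equal 40/40/40 split, the seed, and the arm labels are assumptions made for the example.

```python
# Hypothetical sketch of a computer-generated allocation list (not the
# procedure reported by Moore et al., 2009). Assumes an equal 40/40/40 split.
import random

random.seed(2009)  # arbitrary seed so the example list is reproducible

arms = ["control (usual care)", "saline washout", "Contisol washout"]
allocation = arms * 40          # 120 slots, 40 per arm (assumed)
random.shuffle(allocation)      # chance alone determines the ordering

# Participant 1 receives allocation[0], participant 2 receives allocation[1], etc.
for participant_id, arm in enumerate(allocation, start=1):
    print(participant_id, arm)
```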
Was allocation to treatment groups concealed?
The participants were allocated to their respective intervention groups randomly using computer-generated numbers. The numbers were placed in sealed, opaque envelopes sequentially. Each group was blinded to the intervention it was receiving. However, the nurses administering the interventions could not be blinded “due to the nature of the sterile packaging” (Moore et al., 2009). Efforts were made to ensure the participants were not aware of the solutions used for their interventions until the study ended. Those who performed the allocation did so blindly; therefore, there was no risk of deliberately allocating participants to groups they might have preferred. As such, the results were not influenced by the allocation or distorted in any way. From this assessment, in terms of reliability, the study demonstrated high internal validity, since there was no manipulation of variables to produce a specific result (Watson, 2015). (These two previous sentences are assuming too much.)
Were the treatment groups similar at the baseline?
This response needs to be refocused in order to specifically touch on the following:
Check the characteristics reported for participants. Are the participants from the compared groups similar with regards to the characteristics that may explain the effect even in the absence of the ‘cause’, for example, age, severity of the disease, stage of the disease, co-existing conditions and so on? Check the proportions of participants with specific relevant characteristics in the compared groups. Check the means of relevant measurements in the compared groups (pain scores; anxiety scores; etc.)
⦁ The authors state that the groups were not significantly different at baseline. However, whether the groups were equivalent in terms of their blockage and encrustation history is not discussed. Encrustation and blockage were described as pertinent variables in this study, but it was not discussed whether the participants were equally matched on these characteristics.
Study participants were purposively identified, and their consent was obtained before recruitment into the study. They all met the standard inclusion criteria for the study and thus had similar characteristics. According to Moore et al. (2009), all participants underwent catheter insertion at baseline, and data were collected. “They were then assessed every week for eight weeks, until the end of three catheter changes, or a UTI was established.” The study ensured there were no differences among the participants that would threaten internal validity. Based on this assessment, the study has high inter-rater reliability, as different researchers could use the same group to produce similar results. Watson (2015) posits that a study is characterized by high inter-rater reliability when different people conducting the same study produce consistent results. This relies heavily on the treatment and the characteristics of the participants at baseline.
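For reference only, a minimal sketch of how baseline equivalence could be checked if the raw data were available. The variable names and values below are invented; this is not an analysis from Moore et al. (2009).

```python
# Hypothetical baseline-comparison sketch; all values are invented.
from scipy import stats

# Example continuous baseline measure (e.g., age) for two of the groups.
control_age = [72, 68, 75, 80, 77]
saline_age = [70, 74, 69, 79, 73]
t_stat, p_value = stats.ttest_ind(control_age, saline_age)
print("age comparison p-value:", p_value)

# Example categorical baseline characteristic (e.g., prior blockage history):
# rows = groups, columns = [history of blockage, no history].
table = [[12, 28],
         [15, 25]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print("blockage-history comparison p-value:", p)
```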
Were the participants blind to treatment assignment?
The study indicates that “every attempt was made to keep the participant unaware of which washout solution was used until the end of the study.” This indicates an attempt to blind the participants, but whether or not the attempt was successful is unknown.
The participants were blinded to the treatment allocation. The study reports that nurses administering the intervention ensured that the study participants were unaware of the interventions (Moore et al., 2009). They were kept unaware of the washout solution used until the end of the study. Therefore, there was minimal risk of distortion of results due to participant behavior change (take out the previous sentence, because we do not know what the true risk is). According to Claydon (2015), “the extent to which a measurement corresponds to other valid measures of the same concept indicates high criterion validity.” This is confirmed in a study by Shepherd, Mackay, and Hagen (2018), which established the effectiveness of catheter washout solutions for long-term urinary catheterization in adults. (The previous sentence is irrelevant to what is being asked regarding blinding.) I would include more citations from your text regarding the effects of blinding and the risks of participant awareness.
Were those delivering treatment blind to the treatment assignment?
The nurses delivering the study interventions were not blind to the treatment assignment. Moore et al. (2009) report that “It was not possible to blind the research nurse to the 2 washout solutions because of the nature of the sterile packaging.” Since the nurses were aware of the participants’ allocation and the treatment being given, there is a high (not necessarily a “high” risk) risk of distortion of results. This is due to possible behavioral changes among the nurses. They may have accorded (treated instead of accorded?) the intervention groups different treatment from the control group. Based on this observation, the study risks low inter-rater reliability, since the results might have been different if the nurses had been blinded to the treatment assignment. (Take out the part regarding inter-rater reliability, as this is not indicated. There may have been only one nurse delivering the treatment.)
Were the outcomes assessors blind to the treatment assignment?
The study does not indicate whether there were outcomes assessors. This area is, therefore, unclear. There were outcomes assessors in the study (those who performed the analysis or “assessed the outcomes”), but I would agree that there is no information regarding whether they were blind to treatment assignment in this study. Unless, however, the “nurse researcher” was the same individual assessing the outcomes, in which case he or she was not blind to the treatment assignment.
Were treatment groups treated identically other than the intervention of interest?
Revisit the following:
“Check the reported exposures or interventions received by the compared groups. Are there other exposures or treatments occurring at the same time with the ‘cause’? Is it plausible that the ‘effect’ may be explained by other exposures or treatments occurring at the same time with the ‘cause’? Is it clear that there is no other difference between the groups in terms of treatment or care received, other than the treatment or intervention of interest?”

⦁ This is reiterated in question #10, but is also relevant here. The control group was described as receiving “usual care”, but what constitutes “usual care” is not described. In the study’s literature review, the authors also state that it is customary or “usual” for nurses to perform “saline washouts”.

The study does not outline any different treatments accorded to the various participant groups. It only states that treatment was given to the intervention groups as per the manufacturer’s directions (Moore et al., 2009). To attribute the ‘effect’ to the ‘cause’ in the study, participants who developed symptomatic UTI before the end of the study, or those who commenced antibiotic treatment for suspected UTI, were withdrawn from the study. That way, exposure to other treatments and arising illnesses did not impact the results of the study. Therefore, the internal validity and reliability of the study were high (take out the last statement about the internal validity and reliability of the study being “high”).
Was follow up complete and if not, were differences between groups in terms of their follow up adequately described and analyzed?

Follow-up was incomplete (there was post-assignment attrition), so your response needs to focus on the degree of this study’s “exploration of post-assignment attrition (description of loss to follow up, description of the reasons for loss to follow up, the estimation of the impact of loss).” Answer the following:
⦁ “Examine the reported details about the strategies used in order to address incomplete follow up, such as descriptions of loss to follow up (absolute numbers; proportions; reasons for loss to follow up)”.
⦁ The study provided a description of the incomplete follow up. Absolute post-assignment attrition numbers were discussed in the results and further detailed in Fig. 3, as well as the reasons for loss to follow-up (the reasons are listed in the flow chart AFTER randomization)
⦁ “Check if there were differences with regards to the loss to follow up between the compared groups.”
⦁ Study indicated that “there were no group differences between terminated participants.”
⦁ Did the study provide an estimate regarding the impact of the loss of subjects to attrition? i.e., if there ARE differences between groups with regards to the loss to follow up (numbers/proportions and reasons), was there an analysis of patterns of loss to follow up?
⦁ Since the study indicated there were no differences between groups with regards to the loss to follow up, there was no analysis of patterns or impact of the loss to follow up on the results.
Other potential citations to consider for this response:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5109701/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3427970/

The follow-up was complete. As recorded, a good number of participants (the study indicated concerns for low power due to small sample size, so I would delete this) completed the study. Some of the reasons for withdrawal include death, contraction of a UTI, latex allergy, and catheter blocking. The researchers demonstrated complete knowledge of the randomly allocated participants. In this study, post-assignment attrition amounted to 20.40%. (How was this percentage calculated? I don’t see where this was calculated in the study, so I don’t think I would include it; if you do, you need to describe how you calculated it and cite the relevant literature.) Exploring post-assignment attrition involves describing the loss to follow-up, describing the reasons for that loss, estimating the impact of the loss on the effects, and so on. Nunan, Aronson and Bankhead (2018) establish that more than 20% attrition poses a serious threat to study validity. There is a risk of significant bias when some participants leave. Therefore, the reasons for participants’ withdrawal should be thoroughly examined and acted upon accordingly.
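If an attrition figure is reported, the calculation should be shown explicitly. As a minimal sketch only, assuming attrition is defined as the proportion of randomized participants who did not complete follow-up (the numbers below are placeholders, not values from Moore et al., 2009):

```python
def attrition_percentage(n_randomized: int, n_lost: int) -> float:
    """Post-assignment attrition as a percentage of participants randomized."""
    return 100.0 * n_lost / n_randomized

# Placeholder values for illustration only; substitute the counts reported
# in the study's flow chart (Fig. 3) before citing a percentage.
print(attrition_percentage(n_randomized=100, n_lost=20))  # -> 20.0
```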
Were the participants analyzed in the groups to which they were randomized?
The participants were analyzed in the groups to which they were originally randomized. This is the intention-to-treat method of analysis, which allows the researcher to draw unbiased conclusions regarding catheter washout effectiveness in extending patency time in long-term indwelling catheters. “The method of analysis preserves the prognostic balance afforded by randomization” (McCoy, 2017). As a result, the risk of bias is significantly mitigated in this analysis, thereby contributing to the study’s high validity (there are some validity issues, so I would take out the part regarding “high validity” and focus on something to the effect of “Applying the intention-to-treat principles yields an unbiased estimate of the efficacy of the intervention on the primary study outcome at the level of adherence observed in the trial” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5654877/). The ITT analysis revealed that catheter management is still a challenge for both patients and the registered nurse. After the analysis, the study could not establish a standard catheter washout routine for patients with recurrent blockages. The study by Moore et al. (2009) concludes that the most effective approach is individualized catheter changes depending on the patient’s frequency of blockages, rather than a prescribed regimen. I would take out the last three sentences, which involve interpretation, and stick to writing more information regarding ITT, such as considering information from the following additional citation: “An important issue to consider in the analysis of a RCT is whether the analysis was performed according to the intention to treat (ITT) principle. According to this principle, all patients are analyzed in the same treatment arm regardless of whether they received the treatment or not. Intention to treat (ITT) analysis is important because it preserves the benefit of randomization. If certain patients in either arm dropped out, were non-compliant or had adverse events are not considered in the analysis, it is similar to allowing patients to select their treatment and the purpose of randomization is rendered futile.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168054/
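As a minimal sketch of the ITT principle only, with invented records and field names (not the Moore et al., 2009 data set), the contrast between intention-to-treat and per-protocol grouping can be illustrated as follows:

```python
# Hypothetical illustration of intention-to-treat (ITT) vs per-protocol grouping.
# All records and field names below are invented for the example.
participants = [
    {"id": 1, "assigned": "saline",   "received": "saline",   "days_to_first_change": 21},
    {"id": 2, "assigned": "Contisol", "received": "none",     "days_to_first_change": 14},  # withdrew early
    {"id": 3, "assigned": "control",  "received": "control",  "days_to_first_change": 28},
]

# ITT: every participant is analyzed in the arm to which they were randomized,
# regardless of the treatment actually received, preserving the randomization.
itt_groups = {}
for p in participants:
    itt_groups.setdefault(p["assigned"], []).append(p["days_to_first_change"])

# Per-protocol (shown only for contrast): participants who did not receive
# their assigned treatment are dropped, which can undo the balance created
# by randomization.
per_protocol_groups = {}
for p in participants:
    if p["received"] == p["assigned"]:
        per_protocol_groups.setdefault(p["assigned"], []).append(p["days_to_first_change"])

print(itt_groups)
print(per_protocol_groups)
```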
Were outcomes measured in the same way for treatment groups?
Address the following in clearer terms for this response:
⦁ Same instrument or scale used? Same measurement timing? Same measurement procedures and instructions?
⦁ The intervention was adequately described, with diagramming of the “manufacturers’ directions.” These directions were described as being adhered to throughout the duration of the study. Outcome measures appeared to be justified from the literature review and were clearly stated. Assessments were described as being performed weekly, including… A concern, however, is the fact that the control group was described as receiving “usual care,” but what constitutes “usual care” is not described. In the study’s literature review, the authors also state that it is customary or “usual” for nurses to perform “saline washouts.”
Outcomes of the trial were measured in the same way for all three treatment groups. A common scale of measurement, the Kaplan-Meier curve, was used to measure the “survival” time of the first catheters in every treatment group. This revealed a small difference in the survival times of the catheters for each group. Being an instrument that has been used in several randomized controlled trials in different settings, it is highly reliable. (The Kaplan-Meier survival curves are relevant to the statistical analysis in terms of the time to first catheter change, so let’s take this information out of this response.) An equal interval for changing catheters was set for each treatment group. As a result, the internal validity of the results was assured. Similar procedures were employed; i.e., the washout solutions used for each treatment group were different in composition but similar in amount. The intervention was the use of a 50 ml washout solution for all the treatment groups. The use of standard procedures for this experiment has contributed to the measurement scales’ high internal validity and reliability. (We can’t assume high internal validity and reliability and have to focus on what the study is saying.)
Were outcomes measured reliably?
For this response reconsider the following:
⦁ Check the details about the reliability of measurement, such as the number of raters, the training of raters, the intra-rater reliability, and the inter-rater reliability within the study (not as reported in external sources).
⦁ We cannot assume this information, and we cannot use data as reported in external sources. Only state what is already stated in the study when it comes to this. Information regarding the psychometric quality of the study’s measures was not included.

Article Critique