
Statistics help wanted! Help me finish my master's! Anyone into this stuff?



Just trying my luck here! Anyone really good at research statistics? I'm looking for some help calculating effect sizes or something comparable so I can properly compare the results of 12 selected research papers.

 

My issue is that I've got comparable effect sizes for 4/12 (a good thing) without needing to calculate them myself from the data available in the articles, but the remaining 8 either don't present enough data for me to calculate effect sizes or use a different statistic, namely the McNemar test.

 

I'm horrendous at using SPSS and don't really know where to go from here, but unless I present results in one universal way, I don't know how I can interpret things.

 

Thanks!

my statistics background is pretty basic, coming from sociology, but still, what kind of models are you dealing with?

in any case hit up the statistics reddit and https://stats.stackexchange.com/. way more relevant traffic and chance of getting a serious answer.

Thank you! Will do.

Yeah, it's speech and language therapy single case studies comparing the efficacy of a certain type of treatment. So at its simplest it's pre/post treatment scores as per some form of assessment.

  On 7/27/2019 at 7:40 PM, Polytrix said:

Yeah, it's speech and language therapy single case studies comparing the efficacy of a certain type of treatment. So at its simplest it's pre/post treatment scores as per some form of assessment.

ok, that's way off from what i'm familiar with, but do drop a link to one of those papers.

spss is dogshit btw. if you're not too invested in it already and if your environment doesn't force you to, do switch to something sensible like stata (proprietary but very neat and complete package) or R if you plan to go balls deep into advanced quantitative stuff.

I'm writing the methods section of a systematic review and I want to mention my approach to comparing Treatment Outcomes and Clinical Efficacy.

The title of the systematic review is: A systematic review into the efficacy of interventions utilising semantics in the treatment of bilingual aphasia.

So basically I'm reporting on 12 selected research studies and the efficacy of a specific intervention type as mentioned in these studies.

4/12 articles report effect sizes which I can compare and interpret against established criteria, but the remaining 8 either report McNemar χ² results, report ANOVA/t-test results, or simply report pre/post intervention percentage score differences.

My thought at this stage is to use the data available in the articles that don't report effect sizes, or to convert the other measures, so that I end up with 12 effect sizes to compare. The problem is, I have no idea how to do this, and I don't know if I even have sufficient data: when the papers don't report effect sizes, they also don't give me the means of the two groups or the standard deviations. So I don't have data sets to work with.

What should I do?

 

Posted that elsewhere, sorry for the white text situation!!

yeah i get what you're trying to do conceptually, i just want to see what kind of models and methods are used in those papers you rely on, so drop a link to one of them.

Thanks man. Just for further information, I'm following the methods sections of two similar systematic reviews (extracts below), so my thinking is based on the approach they've detailed when facing this same challenge (i.e. different research papers reporting efficacy in different statistical ways):

 

So one article giving me good effect-size data for what I'm interested in would be this:

 

https://www.researchgate.net/profile/Swathi_Kiran3/publication/240040890_Semantic_feature_analysis_treatment_in_Spanish-English_and_French-English_bilingual_aphasia/links/55ece3e108ae21d099c743a7/Semantic-feature-analysis-treatment-in-Spanish-English-and-French-English-bilingual-aphasia.pdf

 

Then there's a selection of articles giving what I think is effect size data in a different form (McNemar), like this one:

https://www.researchgate.net/profile/Monica_Norvik/publication/318740877_Cross-linguistic_transfer_effects_of_verb-production_therapy_in_two_cases_of_multilingual_aphasia/links/5d132996299bf1547c7f58df/Cross-linguistic-transfer-effects-of-verb-production-therapy-in-two-cases-of-multilingual-aphasia.pdf

 

And then there are less useful articles reporting efficacy in ways like this:

https://www.researchgate.net/profile/Lisa_Edmonds/publication/6878417_Effect_of_Semantic_Naming_Treatment_on_Crosslinguistic_Generalization_in_Bilingual_Aphasia/links/09e4150b65ea2decb3000000/Effect-of-Semantic-Naming-Treatment-on-Crosslinguistic-Generalization-in-Bilingual-Aphasia.pdf

 

1 - Each study was examined for the question(s) which it addressed and relevant pre- and post-therapy data were extracted. We computed statistical significance for the pre- and post-treatment scores using the McNemar's change test (p < 0.05, Siegel & Castellan, 1988). There were two primary reasons for performing the statistical computations. Some studies failed to report any statistical measure (e.g., Faroqi & Chengappa, 1996; Gil & Goral, 2004; Khamis, Venkert-Olenik, & Gil, 1996). A few other studies reported parametric statistical tests whose assumptions of normality and independence were not met by the study design and data. McNemar's change test is a non-parametric test for paired nominal measures such as accuracy data, and has been used by several aphasiologists to compute statistical significance of treatment-induced changes in behavioral scores (Faroqi-Shah, 2008; Rochon, Laird, Bose, & Scofield, 2005). The use of a consistent statistical measure makes comparisons of statistical significance across studies more valid.
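
To make the McNemar step concrete, here's a minimal Python sketch of the computation that extract describes (the 2×2 counts are invented for illustration, not taken from any of the papers above):

```python
# McNemar's change test on paired pre/post naming accuracy,
# as described in extract 1. Counts are invented for illustration.
from statsmodels.stats.contingency_tables import mcnemar

# Rows = pre-treatment (correct, incorrect); cols = post-treatment.
# Only the discordant cells (items that changed status) drive the test.
table = [[18,  2],   # correct pre:   18 stayed correct, 2 became incorrect
         [14,  6]]   # incorrect pre: 14 became correct, 6 stayed incorrect

# exact=True uses a binomial test on the discordant pairs, which is
# safer for the small item sets typical of single-case studies.
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic:.0f}, p = {result.pvalue:.4f}")
```

The point of the paired layout is that each item serves as its own control: an item's pre score is compared only with that same item's post score.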

2 - As well as describing the treatment outcomes of included studies, the clinical efficacy of SFA was determined by calculating effect sizes. Effect sizes could be calculated only in those studies that reported sufficient data. To calculate, it was necessary to determine the individual values for the pretreatment and posttreatment phases for each set of trained items. Cohen’s d statistic was used to calculate effect size as described by Busk and Serlin (1992). The magnitude of change in performance was determined according to the benchmarks for lexical retrieval studies described by Beeson and Robey (2006). The benchmarks were 4.0, 7.0, and 10.1 for small, medium, and large effect sizes, respectively. Where Cohen’s d could not be calculated, the percent of nonoverlapping data (PND) was calculated. PND is the most widely used method of calculating effect size in single case experimental designs (Gast, 2010; Schlosser, Lee, & Wendt, 2008). PND is the percentage of Phase B data points (the treatment phase) that do not overlap with Phase A data points (baseline or no treatment). To determine the magnitude of effect, benchmarks put forth by Scruggs, Mastropieri, and Casto (1987) were used. PND scores higher than 90% were considered to demonstrate a highly effective treatment, PND of 70%–90% were interpreted as a moderate treatment outcome, and PND scores of 50%–70% were considered a questionable effect. PND scores less than 50% were interpreted as an ineffective intervention because performance during intervention had not affected behavior beyond baseline performance.
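
And here's the same kind of minimal sketch for the Cohen's d and PND calculations in extract 2 (scores invented for illustration; Busk & Serlin's d divides the phase-mean difference by the baseline standard deviation):

```python
# Effect sizes for a single-case design, as in extract 2.
# Scores are invented for illustration, not from the linked papers.
from statistics import mean, stdev

baseline  = [20, 25, 22, 24]       # Phase A: pre-treatment naming scores (%)
treatment = [55, 60, 64, 70, 72]   # Phase B: treatment-phase scores (%)

# Cohen's d as per Busk & Serlin (1992): phase mean difference
# divided by the baseline standard deviation.
d = (mean(treatment) - mean(baseline)) / stdev(baseline)

# Percent of nonoverlapping data: share of Phase B points that
# exceed the highest Phase A point (for an expected improvement).
pnd = 100 * sum(x > max(baseline) for x in treatment) / len(treatment)

# Interpret against the benchmarks quoted above: Beeson & Robey
# (4.0/7.0/10.1) for d, Scruggs et al. (90/70/50) for PND.
print(f"d = {d:.1f}, PND = {pnd:.0f}%")
```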

yeah, this is way beyond me bro, never dealt with something like that. though i do see that both the first article you linked and the third one (which doesn't bother with an effect size metric at all) report the post-treatment accuracy scores in % and whether the treatment was significant, so isn't that enough for a proper comparison?

I suppose I could go off significance values, yes, but that's not really a measure of efficacy, I think.

 

I want to be able to say that the approach taken in article X/Y/Z was more or less effective than the approach taken in article XYZ.

 

Not sure, maybe I'm just getting confused about all this. Cheers all the same dude.

i meant comparing the post-treatment scores, not significance levels. it seems that both papers report them identically and thus possibly comparably.

hmm maybe. Just not sure if that's a measure of change tho...will have more of a think tomorrow. Thanks dude.

How can a report not publish effect size? Isn't that like the main thing one wants to put out there? Even the significance score is secondary to it.

Anyway, one should be able to infer the effect size if you know all of the p-value, sample size and variance (if applicable).
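
For what it's worth, here's roughly what that back-calculation looks like for a paired t-test, assuming the paper reports an exact two-sided p-value and the number of pairs (it won't work from just "p < .05"; all numbers below are invented):

```python
# Back-calculating an effect size from a reported p-value and sample
# size, here for a paired t-test. All numbers are invented.
from math import sqrt
from scipy.stats import t

p_two_sided = 0.03   # exact p-value reported in a hypothetical paper
n_pairs     = 15     # number of pre/post pairs

df = n_pairs - 1
t_abs = t.ppf(1 - p_two_sided / 2, df)   # invert two-sided p to |t|
d_z = t_abs / sqrt(n_pairs)              # Cohen's d for paired data
print(f"|t| = {t_abs:.2f}, d_z = {d_z:.2f}")
```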

ZOMG! Lazerz pew pew pew!!!!11!!1!!!!1!oneone!shift+one!~!!!

Haha go for it.

I think it's because these are experimental single case studies and aren't always trying to be super analytical. Just makes life harder for me tho :( Thanks for the help, people. I plan to calculate effect sizes where I can from the data, or just use what I've got in a more general way.
