Former SLCC Students to Review Social Research
Notes on Social Research by Dwight L. Adams

Science is constantly retesting our current body of knowledge to try to find better conclusions. This is taking small, careful steps toward "truth." But even today's accepted scientific methodology is not accepted by all who call themselves scientists. One school of thought, found among the ethnomethodologists, holds that bias is inherent in the current way of doing research, and its adherents want to change the program.
As a partial evolution from this group, the relatively new "Ecological Paradigm" has been born. It seems to combine the empiricists with the ethnomethodologists and add some new ideas. We will hold further discussion of this change until later.
Suffice it to say that "doing science" has changed and most likely will continue to change. What we will cover in this course is the most prevalent methodology of our day.

------Deductive vs. Inductive Reasoning
When doing science, the choice between inductive (descriptive) and deductive (quantitative) reasoning on a given project is basic to the planning of the research. Inductive reasoning, often called "qualitative" or "descriptive" research, is used when science embarks on a phenomenon that is new or has had little or no previous work done on it. Simply put, there is not enough in the literature to begin doing deductive research; there is not enough knowledge to begin to ask the pertinent questions the researcher wants to ask.

A major methodology in descriptive (inductive) research is field work or observation research (which may be participant or non-participant observation, and overt or covert observation). The idea is to get into the field and find out what is going on, without taking in preconceived notions of what will be found. This method has been used often in anthropology and, by trial and error, many strengths and weaknesses have been uncovered.
Because most textbooks on social research cover this methodology only lightly, this guide will emphasize it through both readings and a project. It can be a valuable tool in your future.

Once a specific theoretical perspective is chosen, the hypotheses for this particular study can be decided. Not only should the hypotheses be in harmony with the theory, they should also be stated as "null hypotheses" in compliance with Popper's propositions.
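To make the null-hypothesis logic concrete in code: one common way to test a null hypothesis of "no difference" is a permutation test. The sketch below is a minimal illustration, and all the income figures in it are hypothetical, invented for this example.

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Estimate how often a difference in group means at least as large as
    the observed one would arise if the null hypothesis (no difference,
    the groups are interchangeable) were true."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # pretend the labels don't matter
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_perm                         # the p-value

# Hypothetical incomes (in $1000s) for two groups working the same job.
males   = [52, 55, 60, 58, 61]
females = [51, 54, 59, 57, 60]
print(permutation_test(males, females))
```

A large p-value means the data are consistent with the null hypothesis (we fail to reject it); a small one (conventionally below 0.05) lets us reject the null and claim a real difference.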
The researcher may feel that there is a difference in the income potential between males and females (his hypothesis), but he must try to disprove himself, so he states the hypothesis as: "There is no difference between the income potential of college-trained males and college-trained females working at the same job." This statement is the "null hypothesis" (the opposite of what he believes) and helps him try to prove himself wrong (Popper's proposition).

Please note that the hypothesis needs to be quite specific. The one just given needs to be further specified to make a good study with reliable findings; the next step helps this to happen. The actual research questions need to harmonize with the hypothesis and allow verification by others for additional study.

1) Intersubjective: even though no two scientists are exactly alike with regard to subjective orientations, they would still arrive at the same conclusion upon doing the same experiment.
2) Open to modification: science must be ready to accept revision and change, since science is a process of trial and error--no single research design will provide the "ultimate answer."

Though it might seem odd at this early point of the lesson, I give below a scientific journal editor's viewpoint of how
a good paper should be reviewed and what, according to the scientific community, it should contain.

From the "Journal of Marriage and the Family," Alan Booth has written "Hints for Reviewing Manuscripts": "Writing constructive reviews is something that most of us learn by writing reviews and seeing how others review our work. Over the years I have developed some ideas on the nature of a good review. I have listed some of these ideas below. The inexperienced reviewer may find them a useful guide, and the old hand may find them a helpful checklist of things to take into account in arriving at a judgement about the manuscript.

1) Review should be a constructive process. The author should not feel insulted or put down from reading the review.
2) The author should learn something from the review and have some idea of what to do to improve the manuscript, even if it doesn't warrant publication in JMF.
3) Find something positive about the manuscript, even if it is only a compliment about the author's judgement in selecting a topic.
4) Try to be comprehensive in pointing out strengths of the manuscript as well as its deficiencies.

"Specific Questions--
5) Does the paper build on prior research, take into account all of the directly relevant work, and add something new?
6) Are the hypotheses explicitly and clearly stated?
7) Is the method of study appropriate to the problem?
8) If the work is a data-based article: a) Is the sample large enough and representative? b) Are the measures appropriately constructed and direct?"

------Types of Research:
To find out how people behave: watch them! Observation (overt or covert).
To find out what people have done: read up on it! Existing data (includes media and prior research studies).
To obtain reliable information under controlled conditions: test them! Experiment in a laboratory.
To find out what people think: ask them! Surveys and interviews.
To assess the effects of social intervention: conduct a quasi-experiment! Usually a field experiment.
-------------------
Scientists have to be wary of their own subjectivity and ethics when choosing a method mentioned above. Kaplan came up with the "law of the instrument," which says, in effect, that too frequently we tend to define things to fit a method we are comfortable with. We might tend to define the case as an irrational being with little value to our method of finding truth when, in actuality, she may be the embodiment of the real truth we seek. Our strict methods may blind us to the facts of the case! Dr. Dale Lund (University of Utah) has commented on this common researcher tendency.

------Evaluative or Program Research Projects.
Working with people in an "auditor" way, in a way that is designed to "criticize" their work or program, is usually a delicate matter. The information obtained can be very important to an organization, yet organizations often shy away from employing an evaluator or from listening to the evaluation.

-----A Social Impact Assessment is part of the larger Environmental Impact Assessment process. Herein lies yet another career possibility. There are five types of social impact that are studied:
1) Economic: changes in business activity, jobs, employment, personal income, and in the economic "base" of the community.
2) Demographic: changes in population (not just local; also regional) and in population characteristics (gender ratio, age differential).
3) Fiscal: changes in public costs (e.g., a school district's tax base).
4) Community Service: changes in demand, distribution, and quality of public services.
5) Social: changes in community organizations, perceptions, lifestyles, and life satisfaction, especially changes in specific groups such as the elderly, minorities, or other subgroups.

As these assessments are mandated by law, jobs in these areas can be found. Yet the same regulations require the study to be completed in a given time frame, so use of existing data is necessary (you don't have time to do a lot of planning of new research).

-------Field Research. This is time-consuming but very rich in detail--you may discover things that cannot come out in
any other research design. This type of field work is usually overt (the research subjects know that you are there and have asked permission to do research on them). With covert research, you may be seen in person by the subjects, but your intent of doing research is unknown to them. Another method is to be around the subjects but not be noticed by them. This might be research done as an "ancillary person" (such as a waiter who presumably does not pay attention to his customers' conversations).
-----------
Types of Data:
a) Nominal or categorical. Example: male or female.
b) Ordinal: adds "greater than" or "less than," such as: choose one--strongly agree, agree, neutral, disagree, or strongly disagree.
c) Interval: further adds a KNOWN distance between choices. An example is age--you have younger through older AND know the distance between them in years.
d) Ratio: further adds a KNOWN zero point. Example: what ages did you start and end with? (You cannot assume you started with newborns, so you need to tell me the beginning point of your age measurements.) With that known starting point, you can state your results as a ratio (like group A averaged 2:1 in age over group B).

You must run the right types of variables against each other to get accurate results. Example: since male/female is nominal (no greater than/less than), it usually is used as a "preceding" variable: due to gender, other variables (like a difference in income) follow. It would not be true that earning a certain income would make you, therefore, a male or a female.

ANOVA (analysis of variance) is another valuable stat that you can run on a computer. This stat concerns itself with differences
that are found BETWEEN groups and also the differences found WITHIN groups. Going back to our U professors and their income study, you may have asked a question: perhaps there is a difference BETWEEN genders on overall income, but is the magnitude of that difference more or less than the difference in income WITHIN a gender group?
Perhaps there are a few of one gender that earn an astronomical salary while most of that same gender earn only modestly,
but using the aggregate data, it appears the AVERAGE prof in that gender does pretty well in comparison to the average prof
in the other gender. You can tease this out with ANOVA. We would need to state the possible values that the computer needs to consider and, like we did in the crosstabs, we can also include the department (this time it is not being used as a control; we just want to see income as a variable against departments--a second question: are there greater differences BETWEEN or WITHIN departments?).
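The BETWEEN/WITHIN comparison at the heart of ANOVA can be sketched in a few lines of pure Python (no SPSS needed). The salary figures below are hypothetical, chosen so that one astronomical earner makes the WITHIN-group spread swamp the BETWEEN-group difference:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic:
    (variance BETWEEN group means) / (variance WITHIN groups)."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)                       # number of groups
    n = len(all_vals)                     # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)     # df between = k - 1
    ms_within = ss_within / (n - k)       # df within  = n - k
    return ms_between / ms_within

# Hypothetical professor salaries (in $1000s) for two gender groups.
incomes_m = [90, 95, 100, 250]   # one astronomical earner inflates the mean
incomes_f = [88, 92, 96, 101]
print(one_way_anova_f([incomes_m, incomes_f]))
```

An F value near 1 says the difference BETWEEN the group means is no larger than the spread WITHIN the groups would produce anyway; a large F says the groups genuinely differ.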
We would want to use the exact dollar amounts of incomes.

Make a graph and start placing the grades in high school and in college for each of your 20 peers on the graph. It is very
likely that those who got "As" in high school also got "As" in college, those who got "Ds" in high school also did in college, and so forth. There WILL be some variation to that generalization, but it would be unusual for an "A" college student to have gotten a "D" average in high school. This is such a predictor of future performance that colleges use this stat in admissions!

As you stare at the graph we have created mentally, you note that the dots on the graph seem to conform, in a very general
way, to a straight line. The SPSS package is able to estimate this straight line and its slope on our graph by comparison
of all 20 data sets. It gives us a reading called a correlation coefficient, a number between -1 and +1 (you can loosely think of it as a percentage of agreement). If the line shows a "perfect" correlation of 1.0, the line would go at a 45-degree angle from the bottom left-hand corner to the top right-hand part of the chart; that is, everyone who got "As" in high school also gets "As" in college.

This has been a very quick overview of some of the stats you can generate from stats programs. Of course there are advanced
stats that go beyond this AND there are many stats packages. The final part of your research is the ethical requirement to publish your findings so that others can benefit, replicate,
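The correlation coefficient a package like SPSS reports can also be computed by hand. A minimal pure-Python sketch, using made-up high-school and college GPA pairs for a handful of students (all numbers hypothetical):

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient:
    covariance of x and y divided by the product of their spreads."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical GPAs for a few students: high school vs. college.
hs_gpa      = [4.0, 3.5, 3.0, 2.5, 2.0, 1.5]
college_gpa = [3.8, 3.6, 2.9, 2.7, 2.1, 1.4]
print(pearson_r(hs_gpa, college_gpa))   # close to +1: strong positive correlation
```

A value near +1 is the 45-degree-line situation described above; a value near 0 means high-school grades tell you nothing about college grades; a value near -1 would mean high-school "A" students reliably earn "Ds" in college.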
and build further in the inquiry for truth. You may deliver your findings at a scientific conference or more simply in front
of a PTA group. Your findings may be published by a scientific journal or may find their way into a textbook. The important
thing for you to remember is to gear the presentation to the given audience. This statement does not imply that you change the meaning of the output of your data. You should not change the interpretation
of the findings to be popular with the audience. It does mean that the presentation should highlight those findings and their
interpretation that would be most meaningful to the listener or reader. For an example, another scientist would want a detailed rendering of the sampling techniques and the exact instrument used
(among other things) to see the validity and reliability of your conclusions. When delivery is made at a scientific conference, expect the normal criticism and don't become defensive. The scientific
method is, as you know, to try to DISPROVE everything, so it is natural that you will come under fire. You will have submitted your work about 3-6 months before the conference to get accepted But that is not all that happens!
Your paper will not only be reviewed, but one or more people will prepare to challenge your work on any scientific grounds
they can. They will challenge you right there in public, in front of your peers. You, knowing that this is the normal way of things,
need to prepare to meet any challenge that may be raised. That is the gist of your presentation: you present, they fire at
you, you answer the criticisms until the time runs out or the criticisms end. If your work has not been DISPROVED from a procedural
and scientific point of view, we'll let it stand (for now) to become yet another brick in our stairway toward truth. Good hunting in your quest for truth.