Friday, June 20, 2008

Learning for 20 Jun 08

Today's focus is on the improvement process. After conducting the survey, we need to do the following steps:

  1. Conduct focus groups (about 5 participants each)
  2. Transcribe interviews
  3. Analyse survey to surface gaps
  4. Design intervention (implement for 3 to 6 months)
  5. Conduct another round of survey to determine effectiveness of intervention
  6. Analyse trends across the various batches of students.

Considerations:

  1. Need to use the right questions to probe responses during focus groups.
  2. If there is no difference in means between the expectation and actual surveys, need to ask why and what else.
  3. Need to inform the focus group respondents in advance that the interview will be recorded.

By going through the analysis of the class environment assessment, I have become more sensitive to the information reflected in the data. Even insignificant differences between preferred and actual scores may indicate that the class environment is aligned with the participants' expectations. However, it may also indicate that both surveys were administered in the same sitting, thus biasing the data.

Another observation my team made is that the involvement mean is the lowest score while the co-operation mean is the highest. Why is there this discrepancy? One reason could be that the participants are all senior staff, so we are less expressive in class. In addition, the task given to the team before the survey analysis may not have been challenging enough for the team to bond. If the actual survey were administered after the survey task, the mean for involvement might be higher.

Another learning is that Item Mean = (sum of responses for all items in a scale ÷ 7 items) ÷ 24 respondents, while the Scale Mean = (sum of responses for all items in the dimension) ÷ 24 respondents.
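These two formulas can be sketched in a few lines of code. This is a minimal sketch assuming a 7-item scale and 24 respondents, as above; the response data here is randomly generated and purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# 24 respondents x 7 items, responses on a 1-5 scale (hypothetical data)
responses = rng.integers(1, 6, size=(24, 7)).astype(float)

# Scale Mean: sum of all item responses per respondent, averaged over 24 respondents
scale_mean = responses.sum(axis=1).mean()
# Item Mean: the scale mean spread over the 7 items
item_mean = scale_mean / 7

print(round(scale_mean, 2), round(item_mean, 2))
```

Note that the Item Mean works out to the plain average of every response in the scale, which is why it is handy for comparing scales that have different numbers of items.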

Learning for 19 Jun 08 pm session

This was a brain-concussing session. We had a go at working in SPSS. At the moment, I am still not very sure how to interpret the results generated. We did a reliability test for each of the scales to ensure that the survey is reliable. Next, we did a paired-sample t-test on the preferred and actual surveys to see if there is any difference between expectation and the actual situation. I noted that all our scales are reliable, and there is no difference between the preferred and actual survey means. Why is it so? My take is that doing both surveys in the same sitting may create a tendency towards similar ratings. Secondly, the lesson is structured around collaboration and peer sharing, which is what this survey is supposed to measure.
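The two tests we ran in SPSS can also be sketched by hand, which helped me see what they actually compute. This is a rough sketch, not the SPSS procedure itself: reliability here is Cronbach's alpha, and the t statistic is computed on per-respondent scale means. The 24 x 7 data matrices are randomly generated and hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Scale reliability; items is a respondents x items matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_var / total_var)

def paired_t(pref, act):
    """Paired-sample t statistic on per-respondent scale means."""
    d = pref.mean(axis=1) - act.mean(axis=1)     # preferred minus actual, per person
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(0)
preferred = rng.integers(3, 6, size=(24, 7)).astype(float)
actual = np.clip(preferred + rng.integers(-1, 2, size=(24, 7)), 1, 5)

print(f"alpha={cronbach_alpha(actual):.2f}, t={paired_t(preferred, actual):.2f}")
```

SPSS additionally reports the p-value for the t statistic (looked up from the t distribution with n-1 degrees of freedom), which is what tells you whether the preferred-actual difference is significant.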

Thursday, June 19, 2008

My Big Picture on Learning Environment


I have gone through the notes and activities for the past 4 days and consolidated the learning into a mind map for ease of reference.


Learning for 19 Jun 08 am session

The morning class on using SPSS was mind-boggling for me. I had been analysing survey data for the school for the past 3 years without using t-tests or reliability tests. Now I know all these tests are required to determine whether deviations seen in the survey results are significant. It dawned on me that I have been analysing survey results using my common sense and not following professional methods. I would need to recommend to my boss to purchase the SPSS software and to send me for a survey analysis course. I will have to come up with a structure for SPSS availability, and train the Wings' reps in analysis if I am going ahead with decentralising the analysis of pre- and post-course surveys to the Wings.

Learning for 18 Jun 08

Today Dr Quek gave us a surprise by inviting James, one of her graduate students from her Master's programme, to share on using internet IT tools for learning. He shared on capturing videos on firefox.com, getting freeware on snapfiles.com and organising blogs using Google Reader. He also encouraged us to use wetpaint.com to create wiki pages as a sharing platform for projects. This information is very new to me, as I consider myself an IT suaku. I mentioned all this learning to my colleague in the office and he marvelled at how knowledgeable I had become. I have never felt so young and Generation Y before!

In terms of assessment instruments, we drilled deeper into the details of examining and modifying the WIHIC and other learning environment instruments. We also attempted to interpret the learning environment reports of Schools A, B and C.

The main takeaways for these activities are:
1. Shuffle the question items of each scale to reduce bias and increase respondents' trust.
2. Negative items could be used to reduce predictability. Remember to reverse-code the scores for these items. Note that negative items are used less in recent survey development.
3. Modify question items by contextualising the statements, or reduce the number of scales or items within each scale.
4. Use a response scale of 1 to 4 or 5. Scales with more than 5 points are only suitable for psychometric tests. Another consideration is to use a Likert scale of strongly agree, agree, neutral, disagree and strongly disagree. Other forms of response scale are open to different interpretations and will then need a description for the scale.
5. Use validated instruments and modify from there instead of re-inventing the wheel. This is because validated instruments have gone through factor analysis to ensure they measure what they intend to measure.
6. A survey only gives you a sense of the learning environment; use a mixed approach of surveys and focus groups/interviews to get deeper insights into the factors causing certain perceptions.
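Reverse coding (takeaway 2) tripped me up at first, so here is a minimal sketch of how it works, assuming a 5-point response scale; the data and the choice of which item is negatively worded are hypothetical.

```python
import numpy as np

# respondents x items, on a 1-5 scale (hypothetical data)
responses = np.array([
    [5, 1, 4],
    [4, 2, 5],
    [3, 3, 3],
], dtype=float)

negative_items = [1]      # e.g. item 2 is negatively worded
scale_min, scale_max = 1, 5

# Reverse-code: 1 <-> 5, 2 <-> 4, 3 stays 3 (new = min + max - old)
responses[:, negative_items] = (scale_min + scale_max) - responses[:, negative_items]
print(responses)
```

After this step, a high score means the same thing on every item, so the item and scale means can be computed as usual.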

As mentioned in my reflection yesterday, a survey is not just asking a few questions. The design requires more professional skills. The case studies discussed in class are relevant to me as I am reviewing the pre- and post-course surveys for Air Force School. The case studies highlight the need for person-to-learning-environment alignment; a performing or popular school may not be suitable for a particular child. We need to assess the school climate, leadership style and teachers' interactions. In addition, a cordial school climate may not be good all the time. An effective learning environment needs to move trainees out of their comfort zone and to destabilise them.

I am looking forward to tomorrow's lesson on interpretation of collated data. Yoohoo for learning environment module!

Wednesday, June 18, 2008

Comments on Learning Environment Instrument

The first thing I noticed about the survey is that there is a student profile section to help in analysing the data. However, I have some difficulty understanding the need to know the father's/mother's country of birth. I feel that this may not be a relevant question in our context since our students' parents increasingly come from different parts of the world.

The 2 surveys are designed with similar scales to obtain responses from the students' and the teacher's perspectives. This survey design allows comparison of the teacher's intent with the effect the lesson has on the students. This will provide a more comprehensive analysis of the effectiveness of the learning environment from the 2 main stakeholders of the learning process. Another design feature of the survey is the comparison of preferred and actual perceptions of the environment. This allows a comparison of expectation with the actual situation. However, having the preferred and actual responses side by side in the survey may bias some respondents. I was faced with this dilemma recently when my boss requested me to reflect each individual's pre-course survey response in the post-course survey form so that the trainees have a baseline to refer to. The decision then was to provide the pre-course response to the trainees so that they would know how they had responded (6 months ago), and to accept some possibility of bias.

In terms of the 9 scales of the survey, I feel that they are comprehensive in coverage. However, I would want to make some slight amendments to the question items on Computer Usage. With more use of web-based learning on the internet, I would add questions on speed of retrieval, ease of use of the webpage, availability of online forums and usage of shared documents.

Learning for 17 Jun 08

Today we had some heavy stuff on the various psychologists who contributed to learning environment research: Walberg, Moos, Fraser, Lewin and Murray. My group did a presentation on Lewin, which was fascinating. He contributed the idea of factors that influence a situation, in terms of helping vs hindering factors. These factors change with age and experience as they are related to a person's motivation, values, needs, goals and ideals. In a way, Lewin's framework is based on his belief that learning is holistic and that we have to view each trainee as a unique individual. This perspective is important, as in my "Commanding" management style, procedures and steps tend to be emphasised more than modifying the lesson to fit the individual. I have to keep reminding myself to focus more on the trainees and not on the lesson itself.

My takeaway for Moos is that there are 3 schemes in assessment, i.e. relationship, personal development, and system maintenance and system change. In an assessment, there should be some items on each scheme to be comprehensive. This is reflected in the "big picture" diagram on learning environments.

In my work, I have designed more than 2 questionnaires for pre- and post-course surveys and climate surveys without much knowledge of validation. My discussion with Dr Quek has enlightened me that survey design is not just putting together question items and collating data. I must be well versed in the rationale behind the question items and ensure that they are validated. With the various validated instruments introduced in class, I will be more aware of the existing instruments out there and learn to modify validated instruments instead of re-inventing the wheel. The WIHIC instrument is very relevant to me at the moment as the school is reviewing the pre- and post-course survey on learning.