Your scenario resonates with me very much because I found myself in a similar position a few years ago. I was in charge of the TBGU placement test, an instrument designed to identify at-risk students upon matriculation to the university. There were serious issues with that placement instrument, and I ended up redesigning it completely. I found the change process tedious but ultimately quite straightforward, because I was lucky in that the head of the committee that looked after at-risk students was sympathetic to my rationale and, importantly, not an expert in testing. She teamed me up with another sympathetic and influential professor, who listened to my explanation of the issues and the techniques to overcome them. However, the main reason my changes were implemented was that I volunteered to do the work myself. If the people around me had not been sympathetic, or had been comfortable with the then-current situation, I don't suppose my efforts would have resulted in any change.
A few issues that I had to deal with may be of interest to you. The act of identifying at-risk students was felt by some on the committee to be potentially undemocratic, in that the egalitarian principle of equal education would be undermined. Allied to this was the notion that students labelled at-risk would be stigmatised or marginalised. I had to convince the committee that I had considered these issues before much could be done. My method was two-pronged: to demonstrate the potential value of remedial education to the professoriate itself (i.e., how their own classes would run more smoothly after streaming), and to show how marginalisation could be mitigated by streaming at multiple levels, not just a binary split between at-risk and not at-risk.
Another major issue that may also be relevant to your situation concerns the actual instrument for identifying at-risk students. In TBGU's case, the older method was a standardised grammar-based language test. The upshot was that many high scorers were placed in high-level communicative classes, where their lack of communicative skills negatively influenced their achievement in the course, and vice versa. My method was to interview the teachers about the general content of their courses and, from that information, create a specialised test that would reflect the actual content. The assumption here was that streaming would be more targeted and therefore more successful. I conducted rigorous Rasch analyses on pilot versions of the instrument before its eventual use as a placement test. The proportion of misplaced students fell from 24% to 8%, and teachers were happier with the process. In your case, who directs the metrics and who analyses them? I get the impression from your post that on-the-ground teachers' input is limited in this regard.
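For readers unfamiliar with Rasch analysis, the core idea is that each item gets a difficulty estimate on the same logit scale as test-taker ability, so misfitting or redundant items can be spotted and culled during piloting. The sketch below shows the simplest flavour of this, the PROX (normal-approximation) method for estimating item difficulties from raw scores. The response matrix and all names are entirely hypothetical, invented for illustration; it is not TBGU's data, and a real pilot analysis would use dedicated Rasch software with fit statistics, not this toy.

```python
import math

# Hypothetical pilot data: rows = test takers, columns = items,
# 1 = correct answer, 0 = incorrect. (Invented for illustration.)
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]

n_persons = len(responses)
n_items = len(responses[0])

# Raw item score: how many test takers answered each item correctly.
item_scores = [sum(row[i] for row in responses) for i in range(n_items)]

# PROX first step: item difficulty is the log-odds of failure on the item,
# centred so the difficulties average to zero on the logit scale.
logits = [math.log((n_persons - s) / s) for s in item_scores]
mean_logit = sum(logits) / n_items
difficulties = [d - mean_logit for d in logits]

for i, d in enumerate(difficulties):
    print(f"item {i}: difficulty {d:+.2f} logits")
```

Items few people answer correctly come out with high positive difficulties, easy items with negative ones; a pilot item whose difficulty or fit looks anomalous is a candidate for revision or removal before the instrument goes live.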
You ask some pertinent questions, notably about the capabilities of ADU leadership. May I ask, slightly tongue in cheek: if they are found to be incapable, what would happen then?
I agree with your assertions that context is everything and that the importance of co-ordinated action cannot be overstated. In my case, I was lucky to have two sympathetic and influential supporters. I hope they felt lucky, in turn, to have someone willing to undertake the task. But I am well aware of the need to present ideas well, irrespective of how strong the ideas are. Even a great idea may fail if the wrong person proposes it in the wrong way.
P.S. Please forgive the lack of references. I'm on a long-distance bus, and doing referencing on an iPad mini is a nightmare.