We also have a GPA requirement, a 3.0 GPA requirement that goes along with it. And we're always looking for students who have demonstrated leadership experience as well. So, that can be through a student organization on campus. It can be through a volunteer organization that you're a part of. You can go out and start your own initiative if you're excited about that. We want to see that you're passionate about certain things and that you're going out and creating opportunities for yourself. So, practicing articulating that is huge, right? It can be all up in your head sometimes, and you think you know it, and then you come to a career fair and you're talking to someone and you get tongue-tied. So practicing is very powerful.
But for everyone else, everyone who's listening, thank you so much. Please remember we're taking a little break from the podcast now, right? So the next episode won't be until the beginning of next school year, but the career center isn't taking a break. So anything you've heard us talk about throughout the school year, you can still feel free to come on into our office, right? Interviewing, internships, networking, et cetera, getting ready for career fairs or employer events. We are still a resource. We're here all year long, so you can always come out here and connect with us. Thanks again, and feel free to re-listen to all those episodes. It'll make us feel nice when our stats go up. All right. See you all next fall. Bye everyone. Thanks again for tuning in.
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making. After challenging the validity of these assumptions in real-world applications, we propose ways to move forward when they are violated. First, we show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited. Revisiting this limitation from a causal viewpoint, we develop a more versatile conceptual framework, causal fairness criteria, and the first algorithms to achieve them. We also provide tools to analyze how sensitive a believed-to-be causally fair algorithm is to misspecifications of the causal graph. Second, we overcome the assumption that sensitive data is readily available in practice. To this end, we devise protocols based on secure multi-party computation to train, validate, and contest fair decision algorithms without requiring users to disclose their sensitive data or decision makers to disclose their models. Finally, we also accommodate the fact that outcome labels are often only observed when a certain decision has been made. We suggest a paradigm shift away from training predictive models towards directly learning decisions to relax the traditional assumption that labels can always be recorded. The main contribution of this thesis is the development of theoretically substantiated and practically feasible methods to move research on fair machine learning closer to real-world applications.
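To make the notion of a "group fairness criterion purely based on statistical properties of observed data" concrete, here is a minimal illustrative sketch (not the thesis's own code; the function name and toy data are assumptions) computing the demographic parity difference, i.e., the gap in positive-decision rates between two groups:

```python
def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between group 0 and group 1.

    decisions: list of 0/1 outcomes; groups: list of 0/1 group memberships.
    A value of 0 means the statistical criterion (demographic parity) holds.
    """
    rate = {}
    for g in (0, 1):
        members = [d for d, a in zip(decisions, groups) if a == g]
        rate[g] = sum(members) / len(members)
    return abs(rate[0] - rate[1])

# Toy observed data: group 0 receives positive decisions at rate 0.75,
# group 1 at rate 0.25, so the demographic parity difference is 0.5.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

As the abstract argues, such a criterion depends only on the joint statistics of decisions and group membership; it cannot distinguish causal mechanisms that produce the same observed distribution, which motivates the causal viewpoint developed in the thesis.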
WASHINGTON - The U.S. Department of Housing and Urban Development (HUD) today awarded $66,234 to Piedmont Housing Alliance in Charlottesville in an effort to reduce housing discrimination. This funding is part of $38.3 million HUD awarded to 95 fair housing organizations and other nonprofit agencies in 38 states.
"Ending housing discrimination requires that we support the law of the land and protect the housing rights of individuals and families who would be denied those rights," stated HUD Secretary Shaun Donovan. "Ensuring and promoting fair housing practices lies at the core of HUD's mission, and these grants enable community groups all over the nation to help families who are denied equal access to housing."
Piedmont Housing Alliance (PHA) will use its grant to implement a coordinated fair housing education and outreach campaign that will include extensive and inclusive community education and outreach, an accessibility compliance initiative, and complaint intake and compliance information. Specific activities will include general and targeted education sessions to raise public awareness about housing discrimination, fair housing rights and responsibilities, and equal housing opportunity for the general public and underserved populations, as well as a fair housing forum. Fair and accessible housing education sessions will focus on fair housing for people with disabilities. PHA will participate in Livable for a Lifetime, a leadership group to promote accessibility, visitability, and universal design. Other community outreach will consist of meetings with community partners; print, radio, and TV advertisements; articles; website/social media; outreach to Latino and African-American organizations and communities; community event displays; and regional advisory group meetings.
There was a meeting on voting rights that was closed to the press. But she told the pool that protecting voting rights for free and fair elections was important for the right kind of information to go out. "There are efforts to weaken" voting rights, she said.
Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check whether a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes.
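The key idea behind the multi-party protocols above, that computations over sensitive attributes can proceed without any party seeing them in the clear, can be illustrated with a minimal additive secret-sharing sketch. This is not the paper's protocol (which would involve cryptographically sound MPC between a user, a decision maker, and a regulator); the two-server setup, modulus, and variable names below are illustrative assumptions:

```python
import random

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(x):
    """Split x into two additive shares mod P; each share alone is
    uniformly random and reveals nothing about x."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine two additive shares to recover the shared value."""
    return (s0 + s1) % P

# Each user secret-shares their sensitive attribute bit between two
# non-colluding servers; neither server ever sees a raw attribute.
attrs = [0, 0, 1, 1, 1]
shares = [share(a) for a in attrs]
server0 = [s[0] for s in shares]
server1 = [s[1] for s in shares]

# Additive sharing is linear, so each server can sum its shares locally.
# Only the aggregate (here, the protected-group count needed for a
# fairness statistic) is ever reconstructed.
count = reconstruct(sum(server0) % P, sum(server1) % P)
print(count)  # 3
```

Sums like this are the building block for fairness constraints such as equal positive rates across groups: group counts and group-conditional decision rates can be computed on shares, so a model can be trained or audited for fairness while individual attributes stay hidden.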