cmu.wd5.myworkdayjobs.com/SEI/job/Pittsburgh-PA/AI-Security-Researcher_2020107
Preview meta tags from the cmu.wd5.myworkdayjobs.com website.
Search Engine Appearance
AI Security Researcher
Are you a cybersecurity and/or AI researcher who enjoys a challenge? Are you excited about pioneering new research areas that will impact academia, industry, and national security? If so, we want you for our team, where you’ll collaborate to deliver high-quality results in the emerging area of AI security.

The CERT Division of the Software Engineering Institute (SEI) is seeking applicants for the AI Security Researcher role. Originally created in 1988 in response to one of the first computer viruses, the Morris worm, CERT has remained a leader in cybersecurity research, in improving the robustness of software systems, and in responding to sophisticated cybersecurity threats. Ensuring the robustness and security of AI systems is the next big challenge on the horizon, and we are seeking lifelong learners in the fields of cybersecurity, AI/ML, or related areas who are willing to cross-train to address AI security.

The Threat Analysis Directorate is a group of security experts focused on advancing the state of the art in AI security at a national and global scale. Our tasks include vulnerability discovery and assessments, evaluation of the effectiveness and robustness of AI systems, exploit discovery and reverse engineering, and identifying new areas where security research is needed. We participate in communities of network defenders, software developers and vendors, security researchers, AI practitioners, and policymakers. You'll get a chance to work with elite AI and cybersecurity professionals, university faculty, and government representatives to build new methodologies and technologies that will influence national AI security strategy for decades to come. You will co-author research proposals, execute studies, and present findings and recommendations to our DoD sponsors, to decision makers within government and industry, and at academic conferences. The SEI is a non-profit, federally funded research and development center (FFRDC) at Carnegie Mellon University.

What you’ll do:
- Develop state-of-the-art approaches for analyzing the robustness of AI systems, and apply these approaches to understanding vulnerabilities in AI systems and how attackers adapt their tradecraft to exploit those vulnerabilities.
- Reverse engineer malicious code in support of high-impact customers, design and develop new analysis methods and tools, work to identify and address emerging and complex threats to AI systems, and participate effectively in the broader security community.
- Study and influence the AI security and vulnerability disclosure ecosystems.
- Evaluate the effectiveness of tools, techniques, and processes developed by industry and the AI security research community.
- Uncover and shape some of the fundamental assumptions underlying current best practice in AI security.
- Develop models, tools, and data sets that can be used to characterize the threats to, and vulnerabilities in, AI systems, and publish those results. You will also use these results to aid in the testing, evaluation, and transition of technologies developed by government-funded research programs.
- Identify opportunities to apply AI to improve existing cybersecurity research.

Who you are:
- You have a deep interest in AI/ML and cybersecurity, a penchant for intellectual curiosity, and a desire to make an impact beyond your organization.
- You have practical experience applying cybersecurity knowledge toward vulnerability research, analysis, disclosure, or mitigation.
- You have experience advising on a range of security topics based on research and expert opinion.
- You are familiar with implementing and applying AI/ML techniques to solve practical problems.
- You are familiar with common AI/ML software packages and tools (e.g., NumPy, PyTorch, TensorFlow, ART); a minimal robustness-evaluation sketch using these tools appears after this posting.
- You have knowledge of or familiarity with reverse engineering tools (e.g., NSA Ghidra, IDA Pro).
- You have experience with Python, C/C++, or low-level programming.
- You have experience developing frameworks, methodologies, or assessments to evaluate the effectiveness and robustness of technologies.
- You have superb communication skills (oral and written), particularly regarding technical communication with non-experts.
- You enjoy mentoring and cross-training others and sharing knowledge within the broader community.
- You have a BS in machine learning, cybersecurity, statistics, or a related discipline with eight (8) years of experience; OR an MS in the same fields with five (5) years of experience; OR a PhD in the same fields with two (2) years of experience.
- Applicants with a solid technical background in AI/ML or cybersecurity, but not both, are encouraged to apply, provided they have a strong desire to learn rapidly on the job.

You are able to:
- Travel to various locations to support the SEI’s overall mission, including within the SEI and CMU community, sponsor sites, conferences, and offsite meetings on occasion (5%).

You will be subject to a background check and will need to obtain and maintain a Department of Defense security clearance.

Why work here?
- Join a world-class organization that continues to have a significant impact on software.
- Work with cutting-edge technologies and dedicated experts to solve tough problems for the government and the nation.
- Be surrounded by friendly and knowledgeable staff with broad expertise across AI/ML, cybersecurity, software engineering, risk management, and policy creation.
- Get an 8% monthly contribution toward your retirement, without having to contribute yourself.
- Get tuition benefits at CMU and other institutions for you and your dependent children.
- Enjoy a healthy work/life balance with flexible work arrangements and paid parental and military leave.
- Get access to university resources including mindfulness programs, childcare and back-up care benefits, a monthly transit benefit on WMATA, and free transportation on the Pittsburgh Regional Transit system.
- Enjoy annual professional development opportunities: attend conferences and training, obtain a certification, and get reimbursed for membership in professional societies.
- Qualify for relocation assistance and much more.

Location: Pittsburgh, PA
Job Function: Software/Applications Development/Engineering
Position Type: Staff – Regular
Full time/Part time: Full time
Pay Basis: Salary

More Information: Please visit “Why Carnegie Mellon” to learn more about becoming part of an institution inspiring innovations that change the world, and see the university’s listing of employee benefits. Carnegie Mellon University is an Equal Opportunity Employer/Disability/Veteran. Statement of Assurance.

Always focused on the future, the Software Engineering Institute (SEI) advances software as a strategic advantage for national security. We lead research and direct transition of software engineering, cybersecurity, and artificial intelligence technologies at the intersection of academia, industry, and government. We serve the nation as a federally funded research and development center (FFRDC) sponsored by the U.S. Department of Defense (DoD) and are based at Carnegie Mellon University, a global research university annually rated among the best for its programs in computer science and engineering. Our people apply special knowledge and skills and are part of an elite research university. We perform research and apply our expertise every day to foresee problems and exploit opportunities in software engineering, AI engineering, and cybersecurity. Quality software that is secure will control the future. At CMU SEI, we are engineering that ever-greater software-fueled future.

Need Help? For technical assistance, email [email protected] or call 412-268-4600. If you are an individual with a disability and require assistance with the job application process, please contact Equal Opportunity Services by emailing [email protected] or calling 412-268-3930.
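The requirements above name the Adversarial Robustness Toolbox (ART) alongside PyTorch. The sketch below is not part of the posting; it is a minimal, hypothetical example of the kind of robustness evaluation those tools support, wrapping a toy PyTorch classifier in ART, crafting adversarial examples with the Fast Gradient Method, and comparing clean versus adversarial accuracy. The model, the random data, and the eps value are placeholders, not anything prescribed by SEI.

import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in model; a real evaluation would load the system under test.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# Placeholder inputs and labels; substitute a real evaluation set.
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Generate adversarial examples and compare accuracy before and after the attack.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")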
General Meta Tags (4)
- title
- X-UA-Compatible: chrome=1;IE=EDGE
- content-type: text/html; charset=UTF-8
- viewport: width=device-width, initial-scale=1.0, maximum-scale=2.0
Open Graph Meta Tags (5)
- title: AI Security Researcher
- description: identical to the job description shown above
- image: https://cmu.wd5.myworkdayjobs.com/SEI/assets/logo
- og:type: website
- og:url: https://cmu.wd5.myworkdayjobs.com/SEI/job/Pittsburgh-PA/AI-Security-Researcher_2020107
Link Tags (1)
- canonical: https://cmu.wd5.myworkdayjobs.com/en-US/SEI/job/AI-Security-Researcher_2020107
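The tag listings above can be reproduced from the page itself. The following is a hedged sketch, not part of the original preview, that fetches the posting URL and prints its title, meta, and canonical link tags using the requests and BeautifulSoup libraries; it assumes these tags are present in the HTML as served, which is what the preview above indicates.

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

url = "https://cmu.wd5.myworkdayjobs.com/SEI/job/Pittsburgh-PA/AI-Security-Researcher_2020107"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Page title, if present in the served HTML.
print("title:", soup.title.string if soup.title else None)

# General meta tags use "name" or "http-equiv"; Open Graph tags use "property".
for tag in soup.find_all("meta"):
    key = tag.get("property") or tag.get("name") or tag.get("http-equiv")
    if key:
        print(f"{key}: {tag.get('content')}")

# Canonical link tag, as shown in the Link Tags listing.
canonical = soup.find("link", rel="canonical")
if canonical:
    print("canonical:", canonical.get("href"))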