STUDENTS: See disclaimer and note below

General Questions

  • Global adoption of machine learning and artificial intelligence (AI) technologies continues to expand, and these technologies are now leveraged by many of the applications and tools we use daily, both personally and professionally. As LMU continues to deploy new tools and services, including those that use machine learning and AI, ITS works continually with our vendors to safeguard user privacy and academic integrity, while ensuring that protected data is not used inappropriately.

    ITS will also provide resources and support to members of the LMU community who have an interest in AI. Lastly, ITS will continue to collaborate with campus partners to facilitate, promote, support, and secure LMU's AI related applications, innovations, research, and events.

  • There are several potential pitfalls and risks that institutions like Loyola Marymount University (LMU) should be aware of, and we should be actively working to address these challenges:

    Lack of AI Transparency and Explainability:

    • AI models, especially deep learning ones, can be complex and difficult to understand.
    • Lack of transparency in AI systems can lead to uncertainty about how and why AI arrives at specific conclusions.

    Algorithmic Bias Caused by Bad Data:

    • AI algorithms learn from historical data, which may contain biases. If the training data is biased, the AI model can perpetuate those biases, leading to unfair decisions.

    Privacy Violations:

    • AI systems often process personal data, raising privacy concerns. Improper handling of sensitive information can result in privacy violations.

    Socioeconomic Inequality:

    • Unequal access to AI technology can exacerbate existing disparities. Ensuring equitable AI adoption is crucial to prevent further inequality.

    Social Manipulation and Deepfaking:

    • AI can be misused for disinformation, propaganda, or creating realistic fake content.
    • Safeguarding against malicious intent is essential.

    Despite these risks, AI is not going away. We therefore need to address these challenges head-on and embrace AI where, used responsibly, it can enhance learning, creativity, and accessibility. LMU’s commitment to its mission of education, faith, and justice should guide its approach to AI adoption, ensuring that the benefits outweigh the risks.

  • Refer to the AI Tools Availability webpage for tools supported by ITS and tools provided by third-party companies.

  • Please visit the FERPA - Rights and Privacy Act webpage on LMU's Office of the Registrar website for detailed information.

  • Please visit the AI Data Security and Privacy webpage for detailed information.

  • Within any commercial AI tool, note that your data may be used to train the AI model, so do not enter information that you would not want to appear in other people’s search results or generative AI outputs. Your personal and university data is protected within Microsoft Copilot when you log in with your LMU account. Never enter any PII, confidential, or FERPA-protected data into any AI tool.

  • Please follow the standard ITS Technology Purchasing procedures for any technology needs. For faculty needs, please reach out to the ITS Instructional Technology team. For larger capital projects, please submit a UTC Project request.

  • For staff only - Register for one of the First Thursday Workshop series sessions with AI as the main theme during the summer and fall 2024 semesters. For dates, times, and registration, visit the ITS Technology Training webpage.

    For faculty only - Starting in the fall 2024 semester, the First Friday Workshop series will include AI topics. ITS is partnering with the Faculty Development Office to finalize the schedule and workshop topics.

    Log in to LMU's LinkedIn Learning for many self-paced courses. To learn more about LinkedIn Learning and view login instructions, click here. After you log in to LinkedIn Learning, explore the Artificial Intelligence learning path, which offers over 460 AI-related courses, with more titles released regularly.

  • Prompt engineering is the process of writing, refining, and optimizing inputs, or “prompts,” to encourage generative artificial intelligence (AI) systems to create specific, high-quality outputs (source: IBM Prompt Engineering).

    To learn more about prompt engineering:

    Log in to LMU's LinkedIn Learning for many self-paced courses. To learn more about LinkedIn Learning and view login instructions, click here. After you log in to LinkedIn Learning, click here for a list of courses related to prompt engineering.
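    As a purely illustrative sketch (the prompts and the toy checklist below are hypothetical examples, not an ITS-provided tool), the following Python snippet contrasts a vague prompt with one refined using elements that prompt engineering commonly optimizes, such as role, audience, output format, and constraints:

    ```python
    # Hypothetical illustration of prompt engineering: the same request,
    # first as a vague prompt, then refined with a role, an audience,
    # a format, and constraints.

    vague_prompt = "Write about AI."

    refined_prompt = (
        "You are a university writing tutor. "              # role
        "Audience: first-year undergraduates. "             # audience/context
        "Task: explain, in 3 short paragraphs, "            # format and length
        "how generative AI produces text. "                 # specific task
        "Constraints: plain language, no jargon, "          # constraints
        "and end with one reflection question."
    )

    def prompt_quality_hints(prompt: str) -> dict:
        """Crude checklist of elements a well-engineered prompt often includes."""
        text = prompt.lower()
        return {
            "has_role": "you are" in text,
            "has_audience": "audience" in text,
            "has_format": "paragraph" in text or "list" in text,
            "has_constraints": "plain language" in text or "constraint" in text,
        }

    print(prompt_quality_hints(vague_prompt))    # every element missing
    print(prompt_quality_hints(refined_prompt))  # every element present
    ```

    The checklist is deliberately simplistic; in practice, refining a prompt is an iterative process of adding context, tightening constraints, and evaluating the AI's output.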

Generative AI Text Tools

  • ChatGPT and Microsoft Copilot serve overlapping but distinct purposes. ChatGPT, developed by OpenAI, engages in natural-language conversation and can perform tasks such as composing essays, writing code, and creating poems. Microsoft Copilot is an AI assistant built on the same underlying OpenAI technology and integrated with Microsoft services; when you sign in with an organizational account, it adds data protection. While ChatGPT is a general-purpose consumer tool, Copilot is tailored to organizational contexts.

    Both ChatGPT and Copilot can do the following:

    • Generate human-like text responses by processing user input and providing relevant answers.
    • Excel at composing essays, writing code, creating poems, and more.
    • Understand natural language queries and context.
    • Help users draft content, improve grammar, and structure sentences.
    • Offer writing assistance, whether it’s an email, cover letter, or code comment.
    • Summarize lengthy text or articles.
    • Extract key information and present concise summaries.

    At LMU, we do not currently have enterprise or pro licenses for OpenAI’s ChatGPT. The free consumer version of ChatGPT is available at no cost, but it should NOT be used with any personal or organizational data.

    LMU does provide campus-wide licenses for Microsoft Copilot, and when you are logged in with your LMU account, your data is protected from being used to train the AI models. However, never enter any personally identifiable information (PII) into any AI system.

    To safeguard university and personal data and to protect everyone's privacy, read more on the AI Data Security and Privacy webpage.

  • No, you do not have to use any AI system. However, if you are looking to use a generative AI text system, ITS recommends the protected Microsoft Copilot (built on GPT-4 technology).

  • Your personal and university data is protected and will not be used to train any outside AI models when you log in to Copilot with your LMU username and password. For more information, visit the LMU Copilot website.

AI in the Classroom

  • The university does have Turnitin, which has several key elements for assessing the originality of student work.

    Turnitin offers three sets of tools to uphold academic integrity and empower students to do their best, original work.

    • Similarity report scans student work for plagiarism, highlighting areas that match web-based sources.
    • AI writing detection estimates the percentage of a document that may have been generated by AI writing tools, such as ChatGPT. For FAQs related to the accuracy of Turnitin’s AI detection, click here.
    • Feedback Studio includes a full suite of tools for providing in-text and overall feedback on student submissions.

    For more information about LMU Turnitin and tutorials, click here.

Disclaimer: This website does not endorse the use of AI in teaching and learning at LMU. Faculty decide their policies related to technology in their courses and reserve the right to consider the unauthorized use of AI to be a breach of the university’s Academic Honesty Policy. Students should always consult their faculty for the specific course policies related to the use of AI. Sanctions for violations of the Academic Honesty Policy may include failure for the assignment, the course, academic probation, suspension, or dismissal from the university. For more information, visit LMU's Academic Honesty website.

NOTE: This webpage has not been reviewed by any standing university committees for alignment with university policy, academic freedom, or the faculty role in shared governance. It will undergo a comprehensive review in Fall 2024.