What are the responsibilities and job description for the Data Scientist Intern position at Expatiate Communications?
Expatiate Communications is a leader in special education, redefining the way special education programs are designed and delivered. By leveraging data and technology, we focus on improving academic outcomes and providing equitable access to all students. We develop and implement innovative and sustainable special education programs for Education Agencies, charter schools, families, schools, districts, and County Offices of Education.
Description
This unpaid internship is focused on providing educational and professional development aligned with academic and professional goals. It accommodates academic schedules, offers hands-on training, and complements rather than replaces paid employee work. The internship is limited to a maximum of three months and does not guarantee or imply entitlement to a paid job upon completion.
- Job Type: Unpaid internship (there is no expectation of compensation).
- Duration: Up to three (3) months. The internship is conducted without entitlement to a paid job at its conclusion.
- The internship is limited to the period in which it provides the intern with beneficial learning tied to the intern's formal education program.
- Because the work involves access to student educational data, interns must pass a satisfactory Live Scan background check and provide TB clearance before starting the internship. All associated costs for the Live Scan and TB clearance are the intern's responsibility.
Responsibilities
- Data Collection & Automation:
- Use Selenium to automate data scraping and web interactions.
- Develop and maintain robust web-scraping pipelines.
- Data Analysis & Processing:
- Clean, preprocess, and analyze large datasets using Python libraries such as pandas, NumPy, and scikit-learn.
- Perform exploratory data analysis (EDA) to extract meaningful insights.
- Model Development:
- Assist in building machine learning models to predict trends and optimize decision-making.
- Test, validate, and tune models for better performance.
- Collaboration:
- Work closely with data scientists, engineers, and product teams to understand data needs and deliver actionable insights.
- Document and present findings effectively to both technical and non-technical stakeholders.
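The analysis and modeling responsibilities above can be sketched as a single clean–explore–model loop. This is a minimal illustration, not Expatiate's actual pipeline: the dataset, column names, and choice of a logistic-regression baseline are all hypothetical.

```python
# A minimal sketch of the clean -> explore -> model loop described above.
# All data and column names here are hypothetical, for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical student-outcome data with a missing value.
df = pd.DataFrame({
    "attendance_rate": [0.95, 0.80, np.nan, 0.70, 0.99, 0.60, 0.85, 0.90],
    "hours_of_support": [2, 5, 3, 6, 1, 7, 4, 2],
    "met_goal": [1, 0, 1, 0, 1, 0, 1, 1],
})

# Cleaning: fill the missing attendance rate with the column median.
df["attendance_rate"] = df["attendance_rate"].fillna(df["attendance_rate"].median())

# Exploratory analysis: summary statistics per outcome group.
print(df.groupby("met_goal")["attendance_rate"].mean())

# Modeling: fit and validate a simple baseline classifier.
X = df[["attendance_rate", "hours_of_support"]]
y = df["met_goal"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the imputation strategy, features, and model family would all be chosen after exploratory analysis rather than fixed up front as they are here.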
Qualifications
- Currently pursuing or recently completed a degree in Computer Science, Data Science, or a related field.
- Proficiency in Python, with experience using libraries like pandas, NumPy, and BeautifulSoup.
- Familiarity with Selenium for web automation and data scraping.
- Knowledge of data visualization tools like Matplotlib or Seaborn.
- Basic understanding of machine learning concepts and frameworks.
- Experience with databases (SQL or NoSQL) for storing and retrieving data.
- Knowledge of cloud platforms (AWS, GCP, or Azure) for deploying solutions.
- Experience with business intelligence tools like Power BI.
- Familiarity with version control systems like Git.
- Strong analytical and problem-solving skills.
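To make the BeautifulSoup qualification concrete, here is a hedged sketch of the kind of scraping step the role involves. A real pipeline would fetch pages with requests or drive a browser with Selenium; a static HTML snippet stands in here so the example is self-contained, and the table contents are invented.

```python
# A self-contained scraping sketch: parse a (hypothetical) HTML table
# into structured records. Real pages would be fetched over the network.
from bs4 import BeautifulSoup

html = """
<table id="programs">
  <tr><th>District</th><th>Enrollment</th></tr>
  <tr><td>North Valley</td><td>120</td></tr>
  <tr><td>Lakeside</td><td>85</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.select("#programs tr")[1:]:  # skip the header row
    district, enrollment = [td.get_text() for td in tr.find_all("td")]
    rows.append({"district": district, "enrollment": int(enrollment)})

print(rows)
```

The same parsing logic applies whether the HTML comes from a string, a `requests` response, or Selenium's `driver.page_source`.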