Support software developers with database operations, including the development of complex SQL, tuning of DML, and the creation of stored procedures.
Effectively coordinate and communicate with all stakeholders, including internal teams.
Advise developers on the most efficient database designs.
Set up and maintain database infrastructure.
Learn relevant business processes and understand the data flow, criticality, and dependencies.
Troubleshoot SQL Server service outages as they occur, define SLAs, and ensure support when required.
Generate traces and execution plans; identify and resolve performance issues, deadlocks, and contention (see the sketch after this list).
Enhance and improve the existing database systems infrastructure.
Determine the most effective way to increase performance including hardware recommendations, server configuration changes, performance tuning, etc.
Set up and test high availability as part of the disaster recovery strategy for the databases, ensuring the ability to meet the business’s Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).
Execute data migrations to/from various platforms/engines and database upgrades.
Maintain data integrity and security (manage roles and permissions of database users).
Manage production, staging, and development database environments.
Perform proactive database housekeeping.
Configure SQL Server monitoring utilities to minimize false alarms.
Create detailed documentation including diagrams of database infrastructure.
Administer AWS RDS for MySQL, PostgreSQL, and SQL Server.
Implement replication topologies (master-slave, active-active) and backup strategies.
Optimize performance and conduct query analysis.
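Purely as an illustration of the deadlock/contention triage mentioned above (the posting prescribes no specific tooling), here is a minimal Python sketch that surfaces blocked sessions through SQL Server's documented sys.dm_exec_requests DMV via pyodbc; the driver, server, and credentials are placeholders:

```python
# Minimal sketch: list sessions currently blocked by another session.
# Connection details are placeholders; adjust for your environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-server;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

BLOCKING_QUERY = """
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;  -- only sessions waiting on another session
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(BLOCKING_QUERY):
        print(f"session {row.session_id} blocked by {row.blocking_session_id} "
              f"({row.wait_type}, {row.wait_time} ms): "
              f"{(row.sql_text or '')[:80]}")
```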
Cloud and Tools:
Use AWS services: Redshift, S3, RDS.
Implement SQL Server clustering and high-availability (HA) technologies.
Utilize SQL Server management tools like Redgate.
What we’re looking for:
Working knowledge of MySQL and NoSQL/in-memory databases (e.g., Redis, MongoDB).
Extensive experience writing T-SQL and stored procedures, and tuning queries on high-transaction systems.
Experience with SQL Server Integration Services (SSIS).
Experience with ETL (Extract, Transform, Load) development and data integration.
Experience creating data hooks for analytics tools such as Power BI, Tableau, and Insights.
Sound knowledge of RDBMS concepts, database design and SQL/T-SQL.
Experience with SQL Server clustering and HA technologies, including mirroring, log shipping, failover clustering, and various replication technologies, would be an advantage.
Knowledge of SQL Server management tools such as Redgate.
Knowledge of various database deployment models (SaaS, PaaS, IaaS, etc.).
Experience with MuleSoft is a plus.
Experience with PostgreSQL and sound knowledge of Oracle.
An innovative approach to work, constantly looking to upgrade systems to more efficient and effective new technologies.
Monotype is expanding globally. Proficiency in one or more of the following languages is desirable (not mandatory) for this role: German, Japanese, French, Spanish.
Strong business sense and sense of urgency to achieve business results.
Education + Experience:
4-6 years of working experience in advanced administration of databases such as MS SQL Server and MySQL (incl. Amazon Aurora).
Apply Instruction:
Interested candidates fulfilling the mentioned criteria are encouraged to apply using the Easy Apply button below. Registered candidates may also apply using the Apply Now button.
The Knowledge Graph Engineer will design, develop, and maintain advanced knowledge graph solutions, transforming complex datasets into meaningful, interconnected insights. Leveraging tools like Neo4j, RDF, and ontology modeling, the engineer will play a crucial role in developing Yirifi's risk and compliance knowledge base. Success will be measured by the ability to create scalable graph architectures, ensure data quality, and provide actionable insights to the business.
Responsibilities and Deliverables
Knowledge Graph Development
Build and maintain knowledge graphs using tools like Neo4j, ensuring robust and scalable graph architectures.
Implement data models that accurately represent relationships, hierarchies, and ontologies relevant to risk and compliance.
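As a purely illustrative sketch (the Company and Regulation labels, the SUBJECT_TO relationship, and the connection details below are hypothetical, not Yirifi's actual schema), creating such entities and relationships with the official Neo4j Python driver might look like:

```python
# Illustrative only: labels, properties, and the relationship type are
# hypothetical examples, not an actual risk/compliance schema.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # placeholder
AUTH = ("neo4j", "password")     # placeholder

CYPHER = """
MERGE (c:Company {id: $company_id})
MERGE (r:Regulation {code: $reg_code})
MERGE (c)-[:SUBJECT_TO]->(r)
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    # MERGE keeps the load idempotent: re-running it creates no duplicates.
    driver.execute_query(CYPHER, company_id="ACME-001", reg_code="AML-5")
```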
Ontology and Semantic Modeling
Design and implement ontologies and taxonomies to enhance data discovery and usability.
Develop and manage RDF schemas and OWL ontologies to support semantic data integration and reasoning.
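A minimal, hypothetical example of such semantic modeling using the rdflib library; the namespace, class names, and property are invented for illustration:

```python
# Illustrative only: the namespace, classes, and property are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/risk#")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

# Two classes with a subclass relationship: a Sanction is a kind of Risk.
g.add((EX.Risk, RDF.type, OWL.Class))
g.add((EX.Sanction, RDF.type, OWL.Class))
g.add((EX.Sanction, RDFS.subClassOf, EX.Risk))

# An object property linking companies to the risks they are exposed to.
g.add((EX.exposedTo, RDF.type, OWL.ObjectProperty))

print(g.serialize(format="turtle"))  # emit the ontology as Turtle
```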
Data Integration and Transformation
Integrate structured and unstructured data from diverse sources into the knowledge graph.
Design ETL pipelines to map and ingest data into graph databases, ensuring accuracy and consistency.
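One common ingestion pattern, sketched here under assumed file and column names, is to batch tabular rows through Cypher's UNWIND so that each network round-trip writes many rows rather than one:

```python
# Illustrative only: file name, column names, and labels are hypothetical.
import csv

from neo4j import GraphDatabase

LOAD = """
UNWIND $rows AS row
MERGE (c:Company {id: row.company_id})
SET c.name = row.name, c.jurisdiction = row.jurisdiction
"""

def load_companies(driver, path, batch_size=1000):
    """Stream a CSV into Neo4j, writing batch_size rows per query."""
    with open(path, newline="") as f:
        batch = []
        for row in csv.DictReader(f):
            batch.append(row)
            if len(batch) >= batch_size:
                driver.execute_query(LOAD, rows=batch)
                batch = []
        if batch:  # flush the final partial batch
            driver.execute_query(LOAD, rows=batch)

with GraphDatabase.driver("neo4j://localhost:7687",
                          auth=("neo4j", "password")) as driver:
    load_companies(driver, "companies.csv")
```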
Query Optimization and Analytics
Develop and optimize complex Cypher, SPARQL, and GraphQL queries for data retrieval and analytics.
Enable real-time insights by implementing efficient query performance strategies.
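Two routine Neo4j tuning moves, shown as a non-authoritative sketch with illustrative label and property names: backing frequent lookups with an index, and inspecting an actual execution plan with PROFILE:

```python
# Illustrative only: label, property, and relationship names are invented.
from neo4j import GraphDatabase

with GraphDatabase.driver("neo4j://localhost:7687",
                          auth=("neo4j", "password")) as driver:
    # An index turns the lookup below from a label scan into an index seek.
    driver.execute_query(
        "CREATE INDEX company_id IF NOT EXISTS FOR (c:Company) ON (c.id)")

    # PROFILE runs the query and attaches the actual plan with per-operator
    # row counts and db hits, which is where tuning work usually starts.
    records, summary, _ = driver.execute_query(
        "PROFILE MATCH (c:Company {id: $id})-[:SUBJECT_TO]->(r:Regulation) "
        "RETURN r.code AS code",
        id="ACME-001")
    print(summary.profile)
```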
Data Quality and Governance
Implement validation rules and mechanisms to ensure the integrity and accuracy of the knowledge graph.
Establish metadata standards and governance practices for the graph database.
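A small sketch of what such validation can look like in Neo4j, with hypothetical labels and a hypothetical `source` metadata property: a uniqueness constraint enforced at write time, plus a read-side query that flags nodes missing required metadata:

```python
# Illustrative only: labels and the `source` metadata property are invented.
from neo4j import GraphDatabase

with GraphDatabase.driver("neo4j://localhost:7687",
                          auth=("neo4j", "password")) as driver:
    # Write-time rule: duplicate company IDs are rejected by the database.
    driver.execute_query(
        "CREATE CONSTRAINT company_id_unique IF NOT EXISTS "
        "FOR (c:Company) REQUIRE c.id IS UNIQUE")

    # Read-side audit: flag nodes that lack required provenance metadata.
    records, _, _ = driver.execute_query(
        "MATCH (c:Company) WHERE c.source IS NULL RETURN c.id AS id")
    for rec in records:
        print("missing provenance metadata:", rec["id"])
```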
Collaboration and Stakeholder Engagement
Partner with data scientists, product teams, and compliance experts to align graph solutions with business objectives.
Translate technical concepts into actionable insights for stakeholders.
Emerging Technologies and Best Practices
Stay abreast of advancements in knowledge graph technologies, ontology modeling, and semantic web standards.
Pilot and implement innovative tools to enhance the knowledge graph ecosystem.
Key Performance Indicators (KPIs)
Graph Completeness: Achieve 95% representation of key entities and relationships in the knowledge graph.
Query Performance: Ensure query response times are under 2 seconds for 90% of queries.
Data Quality: Maintain 99% accuracy in entity relationships and ontology alignment.
Stakeholder Usability: Ensure the knowledge graph meets defined use cases for analytics, machine learning, or compliance reporting.
Metadata Coverage: Ensure 100% of graph nodes and edges have associated metadata.
Required Knowledge, Skills, and Abilities:
Graph Databases:
Expertise in Neo4j, including Cypher query language and graph data modeling.
Familiarity with other graph databases (e.g., TigerGraph, Amazon Neptune) is a plus.
Ontology and Semantic Web:
Strong understanding of RDF, OWL, and SPARQL.
Experience in designing and managing ontologies and taxonomies.
Data Integration:
Hands-on experience with ETL tools and frameworks for graph data pipelines.
Ability to process and transform unstructured data into graph-compatible formats.
Programming:
Proficiency in Python or Java for scripting and integrating with graph databases.
Experience with RESTful APIs for accessing and managing graph data.
Data Quality and Governance:
Familiarity with tools and frameworks for data validation and lineage tracking.
Understanding of data governance best practices for graph databases.
Soft Skills
Strong problem-solving and analytical thinking abilities.
Excellent communication skills for explaining technical solutions to non-technical stakeholders.
Ability to work collaboratively in a dynamic, fast-paced environment.
Attention to detail and a commitment to delivering high-quality results.
Education + Experience:
3+ years of experience with Neo4j, including the Cypher query language and graph data modeling.
Job Benefits:
Be part of a team that’s redefining risk and compliance in the crypto industry. As the Knowledge Graph Engineer, you’ll lead the charge in building a world-class knowledge graph solution that powers groundbreaking insights. Join us and make a meaningful impact on the future of digital assets!
Apply Instruction:
Interested candidates fulfilling the mentioned criteria are encouraged to apply using the Easy Apply button below. Registered candidates may also apply using the Apply Now button.
The Data Curator (Information Manager) will be instrumental in organizing and maintaining the data that powers Yirifi’s risk and compliance platform. This role focuses on ensuring that curated datasets are accurate, relevant, and aligned with the needs of business users, including clients and internal teams. The ideal candidate has strong organizational skills, attention to detail, and an understanding of how structured data can drive better decision-making. Success will be measured by the usability, quality, and accessibility of curated datasets and their contribution to business outcomes.
Responsibilities and Deliverables
Data Organization and Management
Manage and organize curated datasets to ensure they are accurate, up-to-date, and easy to access.
Create simple, user-friendly systems for categorizing and tagging data to enhance usability.
Quality Control and Validation
Review and validate data for accuracy, consistency, and completeness.
Work with internal teams to address data gaps and ensure business needs are met.
Metadata and Categorization
Define and maintain categories and tags for datasets to help stakeholders quickly find and use the information they need.
Ensure all data assets are accompanied by clear and understandable descriptions.
Stakeholder Collaboration
Work closely with business, product, and operations teams to understand their data needs and deliver curated solutions.
Collaborate with external data providers to ensure the quality and relevance of incoming data streams.
Reporting and Insights
Develop simple reports and summaries to communicate data quality, coverage, and usability to business stakeholders.
Highlight insights from curated datasets that can drive strategic decisions.
Process Improvement
Identify opportunities to improve how data is organized and maintained, making it easier for teams to access and use.
Propose new ways to categorize or present data to better support evolving business needs.
Ad Hoc Support
Assist with other tasks related to data management and analysis as required by the CEO or leadership team.
Key Performance Indicators (KPIs)
Data Quality: Maintain 99% accuracy across curated datasets.
Data Accessibility: Ensure curated datasets are accessible and understandable for all stakeholders.
Metadata Completeness: Ensure 100% of datasets are categorized and tagged appropriately.
Stakeholder Satisfaction: Positive feedback from internal teams and clients on data usability.
Process Efficiency: Reduce time spent searching for or organizing data by 20%.
Required Knowledge, Skills, and Abilities:
Business Collaboration:
Ability to understand business requirements and translate them into actionable data organization strategies.
Communication Skills:
Strong written and verbal communication skills to create clear documentation and reports for business stakeholders.
Attention to Detail:
Demonstrated ability to ensure accuracy and consistency in data sets.
Familiarity with Data Tools (a Plus):
While not required, experience with basic tools like Excel, Google Sheets, or data visualization platforms (e.g., Tableau) is a bonus.
Soft Skills
Highly organized with excellent time management skills.
Strong problem-solving skills and a proactive approach to addressing challenges.
Ability to work collaboratively across teams in a dynamic, fast-paced startup environment.
Eager to learn and adapt to new systems and processes.
Education + Experience:
3+ years of experience in managing and organizing data for business use, with a focus on quality and accessibility.
Job Benefits:
As a Data Curator at Yirifi, you’ll play a crucial role in ensuring the quality and usability of the data that drives our mission to simplify crypto risk and compliance. This is an opportunity to work closely with a passionate team, contribute to impactful projects, and grow your skills in an exciting and fast-evolving industry. Join us and make a difference!
Apply Instruction:
Interested candidates fulfilling the mentioned criteria are encouraged to apply using the Easy Apply button below. Registered candidates may also apply using the Apply Now button.