Credix is a FinTech company dedicated to growing businesses in Latin America. Building on our expertise, we now focus on providing a tailored Buy Now, Pay Later (BNPL) solution for B2B transactions in Brazil through our platform, CrediPay. CrediPay was created to help businesses grow their sales and improve their cash flow efficiency through a seamless, risk-free credit offering. Sellers offer their buyers flexible payment terms at an attractive price point and receive upfront payments. We manage all credit and fraud risk on our clients' behalf, letting them focus on what matters: increased sales and profitability.
Learn more about our team, culture, and vision on our company page.
Why choose Credix?
As a Senior Data Engineer, you will be at the heart of Credix's data strategy, designing and building scalable pipelines and infrastructure that empower teams across the company. Your work will enable the Risk team to enhance predictive modeling, streamline data consumption for other departments, and help drive contextual underwriting and data-driven decision-making. You are passionate about leveraging data to solve complex challenges and revolutionize the B2B credit market in Brazil.
What we're looking for
Fluent in English and Portuguese, both written and verbal.
Proficient in building and maintaining ETL/ELT pipelines using tools like Apache Airflow and dbt (dbt experience is required).
Strong understanding of cloud data platforms, particularly Google BigQuery and distributed systems.
Experience with Google Cloud Platform (GCP) and Terraform for infrastructure as code.
Expertise in SQL, Python, and data warehousing best practices.
Familiarity with streaming data technologies such as Apache Kafka or Google Dataflow.
Knowledge of API integrations and data transfer protocols.
Strong analytical mindset, problem-solving abilities, and attention to detail in maintaining data quality and integrity.
Excellent interpersonal and communication skills, with a proactive, team-oriented approach.
Ability to mentor and support future team members while excelling as a strong individual contributor.
What you'll do
Introduce daily snapshots of the transactional database to improve the performance of historical queries against the data lake (a minimal orchestration sketch follows this list).
Develop a reliable metrics layer to ensure a daily view of KPIs, reducing reliance on ad-hoc queries and tools.
Implement continuous data quality tests to minimize inconsistencies in production data and ensure 90% coverage of data fields.
Migrate analytical tools to the data lake, reducing the load on transactional databases.
Ensure real-time availability of transactional data in BigQuery to support operational and analytical workloads (see the streaming sketch after this list).
Collaborate cross-functionally with the Risk, Operations, and Product teams to align on data priorities and infrastructure needs.
Anticipate business data needs and ensure scalability and reusability of the data lake.
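To make the snapshot and data-quality responsibilities above more concrete, here is a minimal orchestration sketch assuming Airflow and dbt (both named in the requirements). The DAG id, schedule, and project paths are placeholders chosen for illustration, not a description of CrediPay's actual pipelines.

```python
# Hypothetical Airflow DAG: take a daily dbt snapshot of transactional tables
# and run dbt tests so quality issues surface before downstream consumers
# query the data lake. Paths, ids, and the schedule are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/credipay"  # placeholder path to the dbt project

with DAG(
    dag_id="daily_snapshot_and_quality_checks",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    # Capture slowly changing state from the transactional source as a dbt
    # snapshot, so historical queries hit the data lake instead of the
    # production database.
    snapshot = BashOperator(
        task_id="dbt_snapshot",
        bash_command=f"dbt snapshot --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )

    # Run schema and data tests (not_null, unique, accepted_values, custom
    # tests) so inconsistencies are caught the same day they appear.
    test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )

    snapshot >> test
```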
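For the real-time availability goal, one possible approach is streaming inserts with the BigQuery Python client. The table name, event shape, and entry point below are illustrative assumptions; in practice this logic would more likely run inside a Kafka consumer or a Dataflow job, as the requirements suggest.

```python
# Hypothetical sketch: push new transactional events into BigQuery as they
# occur, so operational and analytical queries see near-real-time data.
from google.cloud import bigquery

TABLE_ID = "my-project.raw.transactions"  # placeholder project.dataset.table

def stream_transactions(rows: list[dict]) -> None:
    """Stream a batch of transaction records into BigQuery."""
    client = bigquery.Client()
    # insert_rows_json uses the streaming insert API; rows become queryable
    # within seconds, without waiting for a batch load job.
    errors = client.insert_rows_json(TABLE_ID, rows)
    if errors:
        raise RuntimeError(f"BigQuery streaming insert failed: {errors}")

if __name__ == "__main__":
    # Illustrative event shape only; real records would mirror the
    # transactional schema.
    stream_transactions([
        {"transaction_id": "tx-123", "buyer_id": "b-42", "amount_brl": 1500.0},
    ])
```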