Browsing by Author "Cardoso, Filipe"
- Design of Data Management Service Platform for Intelligent Electric Vehicle Charging Controller Multi-charger Model
  Publication. Baptista, Pedro; Rosado, José; Caldeira, Filipe; Cardoso, Filipe
  Electric charging solutions for the residential market often require an increase in the contracted power to allow an efficient charging cycle that starts when the charger is connected and ends when the EV battery is fully charged. However, increasing the contracted power is not always the best route to faster and more efficient charging. With a focus on the residential market, the presented architecture suits both single-use and shared connection points, which are becoming common in apartment buildings without a closed garage, and allows the available electrical connections to the grid to be shared. The multi-charger architecture allows one or several common charging points to be used through a mesh network of intelligent chargers orchestrated by a residential gateway. Managing the generated data load involves enabling data flow between several independent data producers and consumers, so the data stream ingestion system must be scalable, resilient, and extendable.
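The abstract does not name the ingestion stack behind the charger-to-gateway data flow. Purely as an illustration of the producer/consumer pattern it describes, the sketch below models several charger producers feeding one residential-gateway consumer through a bounded in-process queue standing in for a real broker (e.g., MQTT or Kafka); all identifiers and values are hypothetical.

```python
import queue
import random
import threading
import time

# Hypothetical sketch: a stdlib queue stands in for the real message broker.
# A bounded queue gives natural back-pressure: producers block when the
# gateway consumer falls behind.
telemetry_bus = queue.Queue(maxsize=1000)

def charger(charger_id: str, samples: int) -> None:
    """Each intelligent charger publishes periodic charging telemetry."""
    for _ in range(samples):
        telemetry_bus.put({
            "charger": charger_id,
            "ts": time.time(),
            "power_kw": round(random.uniform(1.4, 7.4), 2),  # illustrative range
        })
        time.sleep(0.01)

def gateway(expected: int) -> None:
    """The residential gateway consumes and aggregates the stream."""
    totals: dict[str, float] = {}
    for _ in range(expected):
        msg = telemetry_bus.get()
        totals[msg["charger"]] = totals.get(msg["charger"], 0.0) + msg["power_kw"]
    print("aggregated kW samples per charger:", totals)

producers = [threading.Thread(target=charger, args=(f"cp-{i}", 50)) for i in range(3)]
consumer = threading.Thread(target=gateway, args=(150,))
consumer.start()
for p in producers:
    p.start()
for p in producers:
    p.join()
consumer.join()
```

Swapping the queue for a broker client would decouple producers from the gateway process, which is one way to obtain the scalability and resilience the abstract calls for.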
- Performance and Scalability of Data Cleaning and Preprocessing Tools: A Benchmark on Large Real-World Datasets
  Publication. Martins, Pedro; Cardoso, Filipe; Vaz, Paulo; Silva, José; Abbasi, Maryam
  Data cleaning remains one of the most time-consuming and critical steps in modern data science, directly influencing the reliability and accuracy of downstream analytics. In this paper, we present a comprehensive evaluation of five widely used data cleaning tools (OpenRefine, Dedupe, Great Expectations, TidyData (PyJanitor), and a baseline Pandas pipeline) applied to large-scale, messy datasets spanning three domains: healthcare, finance, and industrial telemetry. We benchmark each tool on dataset sizes ranging from 1 million to 100 million records, measuring execution time, memory usage, error detection accuracy, and scalability under increasing data volumes. Additionally, we assess qualitative aspects such as usability and ease of integration, reflecting real-world adoption concerns. We incorporate recent findings on parallelized data cleaning and highlight how domain-specific anomalies (e.g., negative amounts in finance, sensor corruption in industrial telemetry) can significantly affect tool choice. Our findings reveal that no single solution excels across all metrics; while Dedupe provides robust duplicate detection and Great Expectations offers in-depth rule-based validation, tools like TidyData and baseline Pandas pipelines demonstrate strong scalability and flexibility under chunk-based ingestion. The choice of tool ultimately depends on domain-specific requirements (e.g., approximate matching in finance and strict auditing in healthcare) and the magnitude of available computational resources. By highlighting each framework's strengths and limitations, this study offers data practitioners clear, evidence-driven guidance for selecting and combining tools to tackle large-scale data cleaning challenges.
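The chunk-based ingestion credited to the baseline Pandas pipeline can be illustrated with the standard `pd.read_csv(..., chunksize=...)` idiom, timed with the standard library. The file name, chunk size, and cleaning rules below are assumptions for illustration, not the paper's actual configuration.

```python
import time
import tracemalloc

import pandas as pd

def clean_chunk(chunk: pd.DataFrame) -> pd.DataFrame:
    """Minimal cleaning pass: dedupe, drop a domain-specific anomaly, drop NAs."""
    chunk = chunk.drop_duplicates()
    if "amount" in chunk.columns:
        # Example domain rule from the paper: negative amounts in finance data.
        chunk = chunk[chunk["amount"] >= 0]
    return chunk.dropna()

def benchmark(path: str, chunksize: int = 1_000_000) -> None:
    tracemalloc.start()
    start = time.perf_counter()
    rows_kept = 0
    # Chunked reads keep peak memory roughly bounded regardless of file size,
    # which is what makes the baseline pipeline scale to 100M-record inputs.
    for chunk in pd.read_csv(path, chunksize=chunksize):
        rows_kept += len(clean_chunk(chunk))
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{rows_kept} rows kept in {elapsed:.1f}s, "
          f"peak traced memory {peak / 2**20:.1f} MiB")

benchmark("transactions.csv")  # hypothetical input file
```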
- A Practical Performance Benchmark of Post-Quantum Cryptography Across Heterogeneous Computing Environments
  Publication. Abbasi, Maryam; Cardoso, Filipe; Vaz, Paulo; Silva, José; Martins, Pedro
  The emergence of large-scale quantum computing presents an imminent threat to contemporary public-key cryptosystems, with quantum algorithms such as Shor's algorithm capable of efficiently breaking RSA and elliptic curve cryptography (ECC). This vulnerability has catalyzed accelerated standardization efforts for post-quantum cryptography (PQC) by the U.S. National Institute of Standards and Technology (NIST) and global security stakeholders. While theoretical security analysis of these quantum-resistant algorithms has advanced considerably, comprehensive real-world performance benchmarks spanning diverse computing environments, from high-performance cloud infrastructure to severely resource-constrained IoT devices, remain insufficient for informed deployment planning. This paper presents the most extensive cross-platform empirical evaluation to date of NIST-selected PQC algorithms, including CRYSTALS-Kyber and NTRU for key encapsulation mechanisms (KEMs), alongside BIKE as a code-based alternative, and CRYSTALS-Dilithium and Falcon for digital signatures. Our systematic benchmarking framework measures computational latency, memory utilization, key sizes, and protocol overhead across multiple security levels (NIST Levels 1, 3, and 5) in three distinct hardware environments and various network conditions. Results demonstrate that contemporary server architectures can implement these algorithms with negligible performance impact (<5% additional latency), making immediate adoption feasible for cloud services. In contrast, resource-constrained devices experience more significant overhead, with computational demands varying by up to 12× between algorithms at equivalent security levels, highlighting the importance of algorithm selection for edge deployments. Beyond standalone algorithm performance, we analyze integration challenges within existing security protocols, revealing that naive implementation of PQC in TLS 1.3 can increase handshake size by up to 7× compared to classical approaches. To address this, we propose and evaluate three optimization strategies that reduce bandwidth requirements by 40–60% without compromising security guarantees. Our investigation further encompasses memory-constrained implementation techniques, side-channel resistance measures, and hybrid classical-quantum approaches for transitional deployments. Based on these comprehensive findings, we present a risk-based migration framework and algorithm selection guidelines tailored to specific use cases, including financial transactions, secure firmware updates, vehicle-to-infrastructure communications, and IoT fleet management. This practical roadmap enables organizations to strategically prioritize systems for quantum-resistant upgrades based on data sensitivity, resource constraints, and technical feasibility. Our results conclusively demonstrate that PQC is deployment-ready for most applications, provided that implementations are carefully optimized for the specific performance characteristics and security requirements of target environments. We also identify several remaining research challenges for the community, including further optimization for ultra-constrained devices, standardization of hybrid schemes, and hardware acceleration opportunities.
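For a sense of what a KEM micro-benchmark in the spirit of this framework might look like, here is a hedged sketch using the liboqs-python bindings (the `oqs` package), which the paper does not necessarily use. Mechanism names vary by liboqs version ("Kyber512" on older builds, "ML-KEM-512" on newer ones), and the iteration count is arbitrary.

```python
import time

import oqs  # liboqs-python bindings; an assumption, not the paper's confirmed toolchain

def bench_kem(alg: str, iterations: int = 100) -> None:
    """Time encapsulation + decapsulation and report key/ciphertext sizes."""
    # "Kyber512" corresponds to NIST security level 1; substitute
    # "Kyber768"/"Kyber1024" for levels 3/5 where supported.
    with oqs.KeyEncapsulation(alg) as receiver, oqs.KeyEncapsulation(alg) as sender:
        public_key = receiver.generate_keypair()
        start = time.perf_counter()
        for _ in range(iterations):
            ciphertext, shared_secret = sender.encap_secret(public_key)
            # Both sides must arrive at the same shared secret.
            assert receiver.decap_secret(ciphertext) == shared_secret
        elapsed = time.perf_counter() - start
        print(f"{alg}: pk={len(public_key)} B, ct={len(ciphertext)} B, "
              f"{elapsed / iterations * 1e3:.2f} ms per encap+decap")

bench_kem("Kyber512")
```

Key and ciphertext lengths reported this way also make the TLS handshake-size overhead discussed in the abstract concrete: a post-quantum public key plus ciphertext can be an order of magnitude larger than an ECDH share.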