THD 86: The Lyceum, A Living Data Management System for Audio Product Development

Welcome to another episode of the THD podcast, thanks for checking us out. Today we have a company called Lyceum, and they're creating a database for companies that are building audio hardware products, or really anything processing sound. They've got a database that can capture a lot of test and development data on the development of those hardware products, so we're going to find out exactly what they're doing in a moment. But without delay, let's give some credit to ALTI, our sponsor. ALTI is an association called the Audio and Loudspeaker Technologies International. They used to be the American Loudspeaker Manufacturing Association, but now they've gone global, and their mission is to promote and advance the interests of the loudspeaker and related audio tech industry. They're really some cool guys behind networking for audio product development, which is kind of what we're into here, so we encourage you to check out ALTI. So without delay, let's say hello. We've got Simon in Japan: good morning, Simon. "Morning, yes." And Joshua Levy, co-founder and in charge of development at Lyceum. How are you doing this afternoon, Joshua? "I'm doing well, thanks for having me, David." Right, and Chris Martellotti, he's a co-founder as well, and he's kind of the product manager: taking care of customers, finding out what the market needs, and giving feedback and developing on that tangent. So hey Chris, how are you doing? "Good, good to be here." All right. So I tried to boil things down for people, to get them interested to stay tuned to the podcast. Did I do a good job of capturing what you guys are up to, and maybe let's expand on it? "I think so, yeah, and if anyone has any more questions, hopefully we can clarify with the presentation we're about to give." Okay, so we'll jump right into the presentation then. All right, great. Well, welcome to our presentation of the Lyceum.
First of all, thank you, Dave, for giving us the platform and access to your audience to share what we've been building over the past few years. We're very excited, because we think that the Lyceum can be a game-changing software for audio product development. We developed the Lyceum as a browser-based living database platform with a beautiful user interface and a fast serverless framework. The benefits of these development choices provide audio engineers with a fast, easy, and scalable solution to integrate into their R&D and manufacturing processes. So picture this: you're a newly hired engineer at a large consumer electronics development team, and you're tasked with baselining the performance of a new product in development, but you quickly run into some hurdles, which involve experiment reproduction issues, scattered data, and collaboration roadblocks. But fear not, the Lyceum is here to transform that journey. The Lyceum isn't just software: it's a force of efficiency, an online collaborative masterpiece crafted to be an online lab assistant for hardware system engineers. We can bid farewell to manual data management and embrace the power of automation. "To help people out: the name of your company, Lyceum, it's based on this concept of remote learning and remote sharing, isn't it? It's like a Greek word or something, is that where it comes from?" Well, the Lyceum actually is a Greek word, you're right. It was the name of an old gymnasium where Socrates used to spend his time. Socrates' famous disciple was Plato, and Plato himself was very well established; he went on to establish the first university known in human history, and they called it the Academy. Aristotle was learning under Plato at the Academy, but he didn't like the back-and-forth style of learning that the Academy provided. It was very upper-echelon: if you had enough money you could get into the Academy, and it wasn't really a fact-based, experimentation style of learning; it was basically just going back and forth and theorizing about the reality of the world. So Aristotle had his own spin on things, and it actually got a lot of steam, enough so that Plato didn't like it, and with his Macedonian descent Aristotle was exiled. He went to an isle off the coast of Greece; I think it was Lesbos, but I forget, it's irrelevant. There he basically created a ton of different studies of science, mainly in biology: he studied the plant life and created biology from it. He's really known for grandfathering what we know today as the scientific method, where you establish a hypothesis, conduct an experiment, collect data, and generate a result from that data. Long story short, he made a name for himself there and made his way back to Greece, where he established the Lyceum, which was open-source, kind of anyone-can-come-and-go lectures, where they would conduct classes in this old rundown gymnasium, the real place where Socrates used to hang out. So that's where the Lyceum comes from. "Okay, cool. It's the collaborative, in-and-out, okay." Yeah, so basically with the Lyceum we have a lot of ambition and a lot of features we want to add into it based off of those principles I was talking about, with Aristotle and the scientific method. But right now with the Lyceum, our first goal is mainly to ingest a bunch of data and make it available to as many
people as possible, so they can collaborate on it. Okay, so a little bit about us. First off, myself, Joshua Levy: I'm an audio systems engineer with over a decade of experience in the consumer electronics industry. I've worked on some projects including the Amazon Alexa, the Facebook Portal, and the Oculus Quest Pro, and I developed the technical features of the actual Lyceum. Being in the trenches and working with data all the time, I'm intimately familiar with the problems that engineers encounter on a day-to-day basis. "Yeah, Chris Martellotti. I met Josh around the end of last year and we started working together; we sort of hit it off. I had some experience working in hardware, a lot of my background has been in data and things of that nature, and I just kind of said, okay, this model makes a lot of sense. So I'm leading go-to-market and product: really understanding what people want in the marketplace and how we can provide that. Most of my work has been very early on, helping startups get off the ground and to millions in revenue. We're very early, but we're hearing great things in the market, so I'm excited to be working with Josh on helping launch this." Right, yeah, thanks Chris. So let's dive into the essence of the Lyceum. With over three years of development, the Lyceum has been shaped by the wisdom and expertise of audio industry owners, technology directors, as well as individual contributors at companies, and their input has guided every step of our product development. When it comes to audio R&D and manufacturing, the Lyceum is the first of its kind. It's been meticulously built from the ground up, tailor-made to support the unique characteristics of audio hardware engineering, all while maintaining the security and privacy that engineering teams require. The Lyceum excels at hardware
data ingestion, offering a software solution crafted specifically for a wide range of hardware engineering data. It is able to seamlessly manage measurement data as well as the limits crafted for that measurement data that validate product performance; it's the missing piece that seamlessly integrates into engineers' workflows. Another differentiation of the Lyceum is its ability to handle large-scale data processing, all via the web browser. Designed to tackle a vast amount of data swiftly, the platform ensures that you'll never be held back by data bottlenecks. With the Lyceum you'll experience the power of efficiency as it effortlessly processes your data, unlocking insights at speeds that leave traditional approaches in the dust. So let's unravel the core of organizational decision making, the very foundation upon which success is built. Imagine a pyramid, a structure that represents the flow of data, insights, and actions within an organization. At its base, the pyramid holds the key to experimentation: the realm where data acquisition systems like Klippel, Audio Precision, and SoundCheck thrive. Moving up the pyramid, we reach the three crucial steps that form the heart of decision making: data ingestion, visualizations, and collaboration. And guess what: the Lyceum streamlines all three levels. The first step is data, the lifeblood of informed decision making. Generated from the data acquisition systems, it forms a solid foundation for understanding and analysis. The Lyceum ingests data from these data acquisition systems and transforms it into a clean and code-ready format. Visualizations take us to the next level, where graphs, statistics, and yield live, providing a clear, concise representation of that data. They act as a guiding beacon, illuminating a path forward for decision making. But it doesn't end there: insights emerge as the culmination of data and visualizations. Reports, collaboration,
and documentation unite, empowering teams to extract meaning and pave the way for intelligent actions. And at the peak of the pyramid lies action, which is the ultimate goal: based on the solid foundation of the prior levels, specific product-related decisions take shape, driving progress and propelling organizations towards success. The Lyceum stands as a catalyst streamlining this entire process; it's the tool that harmonizes the data, the visualizations of that data, the insights, and the actions: a comprehensive solution for organizational decision making. Okay, so now let's embark on the journey of data centralization, a process that unravels the complexity of data collection, shaping the very foundation of informed decision making throughout the product development life cycle. The engineer's most valuable time for a company is spent in the pursuit of collecting data to guide decisions and to validate product performance to ensure manufacturability. Then, once an experiment yields results, they are compared against upper and lower limits, which are defined by a program's performance requirements. At the heart of data collection lies the design of experiment, a strategic approach employed by engineers to test hypotheses and gather essential insights into product performance. This process takes place in a research laboratory or on a manufacturing line, as they strive to uncover meaningful and actionable insights into the development of hardware products. Engineers design experiments ensuring they adhere to rigorous standards: a well-designed experiment includes a set of controls and variables which are carefully managed to make the results repeatable and accurate. By controlling the variables that might influence the outcome, engineers
maintain the reliability and validity of the data that they collect. The Lyceum allows engineers to attach descriptions of these control variables to their data, allowing other engineers to easily replicate results, and managers and directors to make decisions backed by robust evidence. Whether in the controlled environment of a research laboratory or the dynamic realm of a manufacturing line, engineers exercise their expertise to capture valuable data that shapes the course of their product development, and data centralization is the key to harnessing the insights gained through this time-consuming process. So this empowers organizations to unlock the full potential... "Sorry Josh, would an example be: if we have a development lab in Boston for a headphone company, and they have a factory in China, and maybe the lab has a Klippel system or something like this, but the factory in China has a SoundCheck on the production line. They've done this experiment, they've outlined the tolerances that they need, and then that data set, or the design of experiment, goes to the factory in China, and there's a way to replicate it with their equipment in China? Is this kind of it?" Yeah, the goal is to go both ways, it's really not one way or another. In most scenarios, you can have a prototype built in the United States, and you want to baseline that performance so it produces the same over in China. You could definitely replicate that, with all the details being attached to the data from that experiment, put it on the Lyceum, and then have somebody access it on the Lyceum. But I'm going to get into those scenarios in a bit. "Okay, yeah, no problem." Thank you for the question. Audio engineers are equipped with a diverse range of
tools to collect the data that they need. Once a well-designed experiment has demonstrated repeatability and accuracy, engineers utilize the data acquisition tools to replicate the experiment multiple times. With each iteration, these tools capture valuable data, offering engineers information to analyze and derive insights from. The choice of the data acquisition tool depends on several factors, ranging from product application to resource budgeting and even an engineer's personal preference. So at the beginning of the DOE process, the design of experiment process, the equipment setup and the sequence of events undergo careful iteration. It's like a recipe that engineers strive to create, which ensures that every element aligns to produce the best results. Once the output data is reviewed and finalized, the experiment doesn't stop there: it enters a new phase where it is run multiple times on a variety of devices, and each repetition holds the promise of uncovering new insight. Taking the DOE process to another level, experiments may unfold in different locations, be it the United States or China; this is exactly what we were talking about just a second ago. Despite the geographical divide, engineers strive to ensure that these experiments remain as identical as possible. And why is this synchronization crucial? It's all about validating a device's performance, giving confidence to the development teams, and gaining invaluable insights while keeping the pace of the program's development. Engineers have to coordinate their experiment process across borders; they work most of the time off standard hours, so if it's five o'clock here, it's nine a.m. there. It's nice from a company's perspective because everyone's working around the clock, but teams working on the same thing in different hours are difficult to align. They do so, and it usually takes weeks in order to standardize a specific
experiment. The goal is to ensure the experiments unfold in parallel, sharing a common framework and capturing comparable data points, and through this coordination they not only validate the performance of the device but also foster a spirit of collaboration, which collapses that time and space and leverages collective expertise. At the same time, there's also the instance where the same engineer needs to replicate an experiment over time: it could be weeks or months or even longer between each iteration, and this requires the engineer to recall previous steps and setup details, use the same equipment, and potentially find and use the previous data and limits. The scenario could also involve two different engineers at the same organization: for instance, if an engineer working on a product leaves a company and a new engineer needs to backfill their work, they would need to find and be able to reproduce all the work from the since-departed engineer. The goal is to really extend the lifespan of data, right? If an engineer leaves a company, the data that they've collected at that company typically dies. "Yeah, I had that problem this week." Oh, great. Yeah, so now the most difficult situation, really the holy grail of test engineering: the replication of results across time, locations, and setups. In the realm of test engineering, a significant challenge emerges, and that is to reproduce results that transcend the boundaries of time, locations, and setups. It's a complex puzzle requiring meticulous attention to detail and data management, and as engineers strive to conquer this challenge, a trend begins to emerge: the realization that centralizing a standardized format of test data is the key to success. Centralization of data empowers engineers and their teams to uncover trends, identify patterns, and draw meaningful conclusions. It brings together disparate pieces of
information, creating a cohesive and powerful repository of knowledge. Data ingestion is an essential step of harnessing the power of the Lyceum and unlocking the full potential of your development process. The journey begins by uploading all your valuable data into the Lyceum platform: every piece of information, every data set, ready to be harnessed and transformed into actionable insights. But it doesn't stop there: the Lyceum empowers you, as an individual engineer, to customize how your distinctive data sets are read, ensuring that it aligns seamlessly with your unique requirements and application-specific functions. It's like having a tailor in your closet, adjusting every piece of clothing that you want to wear for every occasion, but this is for all your data across all your experiments and programs, allowing you to extract the most value from your development efforts. Okay, there are, however, significant challenges to data ingestion: a path filled with complexities that demand our attention and innovative solutions. As we embark on this journey, we encounter the first hurdle, which is data format differences. Different tests, different formats: it's like a puzzle waiting to be solved, demanding flexibility and adaptability when handling diverse data sources. But the challenge doesn't end there: security issues loom large. Internal data, a treasure trove of insights, must be safeguarded and shielded from external teams and vendors; the integrity and confidentiality of your information are of paramount importance. Searchability becomes our next frontier: data, once ingested, needs to be easily found again in the future, and it must also be intelligible and contextually understood, enabling efficient retrieval and utilization. We also have the need for robust processing applications: statistical analysis such as visualization, limit generation,
yield calculation: they all come into play and empower you to extract meaningful insights. So now we get into the tangled web of format issues, a challenge faced by many in the realm of data ingestion. When it comes to data, variety is the spice of life: data from different companies, from different programs, and even data from different teammates within a single program can all be different. It's like navigating through a labyrinth of sheets with varying names and structures. Spreadsheets from different experiments can have different sheets with different names and various categories of data, and can contain minute differences that make them cumbersome to post-process. We understand the frustration of spending hours cleaning and organizing data, and we strive to make sense of all that chaos. So here we encounter a scenario with repetitive x-axis data for each measurement: while it serves its purpose during acquisition, it becomes unnecessary during post-processing and visualization stages. You can see in these columns here you have x-axes that are repeating, that are identical, and if you were to run a Python post-processing script on top of this, it would be a cumbersome task to have to remove all of it. Additionally, the data includes extra header information such as units, as highlighted in the fourth row right here. Another example of data format challenges is where the measurement data is separated into rows instead of columns. This unique scenario presents its own set of complexities that demand our attention: the structure of the data can pose obstacles, especially when it comes to efficient data handling and analysis. The traditional column-based format is often preferred for its ease of interpretation and compatibility with various tools and algorithms.
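As a minimal sketch of that repeated-x-axis cleanup, here is what a pandas-based post-processing step could look like. The column names (`Freq_1`, `SPL_1`, ...) are hypothetical stand-ins for a real export, not the Lyceum's actual implementation:

```python
import pandas as pd

def drop_repeated_x_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the first x-axis column and drop any later columns that
    duplicate it value-for-value (common in per-measurement exports)."""
    first = df.iloc[:, 0]
    keep = [df.columns[0]]
    for col in df.columns[1:]:
        if not df[col].equals(first):  # drop exact duplicates of the x-axis
            keep.append(col)
    return df[keep]

# Example: a frequency column repeated before every measurement column
raw = pd.DataFrame({
    "Freq_1": [100, 1000, 10000], "SPL_1": [85.2, 88.1, 79.5],
    "Freq_2": [100, 1000, 10000], "SPL_2": [84.9, 88.3, 79.1],
})
clean = drop_repeated_x_columns(raw)
print(list(clean.columns))  # ['Freq_1', 'SPL_1', 'SPL_2']
```

The same idea generalizes to any export where the acquisition tool re-emits the axis next to every trace.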
The presence of upper and lower limit information alongside the measurements also makes it difficult for a code parser to handle them separately, which adds a layer of complexity when it comes to data interpretation and analysis. Here's yet another example of data format complexity: a data set that contains a wealth of metadata. In this intricate scenario we encounter additional elements that demand our attention: this data format incorporates valuable information such as pass/fail indicators, tester station IDs, appraiser details, and timestamps. While these metadata elements provide crucial context and insights, they can also present challenges when it comes to streamlining data processing. And here's one of the more extreme examples of data file formats that we've come across. There are an infinite number of ways that these software suites, these data acquisition systems, can export or produce data, not only from SoundCheck and Klippel or Aqua, but also from shop-floor systems that were made in China, for example. Okay, so we basically set a golden standard format, which standardizes the format and ensures seamless integration and compatibility with the Lyceum's back end. In this format, the data is structured in columns with a single header row at the top of the sheet. There are potentially multiple sheets that also have this format, and one common axis column; it can be here or in any other column, but it has to all be column-based. By adhering to this golden standard, we unlock a world of possibilities: the Lyceum empowers us to unleash the data's full potential, and the data can also be easily used and transferred to other platforms like MATLAB, Python, and Excel to be post-processed.
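To make the golden-standard idea concrete, a sheet could be sanity-checked along those lines: column-oriented, one header row, a shared x-axis column, numeric measurements. This is an illustrative sketch with made-up names, not the Lyceum's actual validation API:

```python
import pandas as pd

def is_golden_standard(df: pd.DataFrame, x_column: str):
    """Loose check of the 'golden standard' layout: column-oriented data,
    unique headers, a shared x-axis column, numeric measurement columns."""
    if x_column not in df.columns:
        return False, f"missing common x-axis column {x_column!r}"
    if df.columns.duplicated().any():
        return False, "duplicate column headers"
    non_numeric = [c for c in df.columns
                   if not pd.api.types.is_numeric_dtype(df[c])]
    if non_numeric:
        return False, f"non-numeric columns (stray header rows?): {non_numeric}"
    return True, "ok"

sheet = pd.DataFrame({"Frequency_Hz": [100.0, 1000.0],
                      "DUT_01_dB": [85.2, 88.1],
                      "DUT_02_dB": [84.9, 88.3]})
ok, reason = is_golden_standard(sheet, "Frequency_Hz")
```

A check like this is also a convenient guard before handing a sheet to MATLAB, Python, or Excel for post-processing.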
Through our powerful features and capabilities, we take pride in transforming your data, unleashing its true potential, and revolutionizing the data management experience. One of the ways we accomplish this is by accepting a variety of formats, because we understand data comes in different shapes and sizes and we embrace this diversity. Whether it's CSV, Excel, .txt, .dat, .mat, or other commonly used formats, the Lyceum seamlessly integrates with your data, ensuring smooth, hassle-free uploads. But we don't stop there; we need to use it. "Just a quick question: so basically, if the end customer is exporting from MATLAB or something, the Lyceum is doing the job of mapping what that data is and putting it into your standardized format? That's kind of the ease of use: the user would select, okay, this came out of SoundCheck, or this came out of this, and then you could map it into this standardized format? That's like one of the real gems here." There's an extra step that is required, but we do do the transcribing from a .mat file to, I guess, spreadsheet format, so then you can go in and do what we call configuring the actual data file; we can get into that in the demo. "Sure, okay." So, in order to deliver all these features, the back-end orchestra requires a significant amount of investment into its architecture. Our back end has been tailored over the course of the past three years to focus on three critical components: one is fast performance anywhere in the world via the web browser; two is database separation, allowing users to separate data files into organized groups, as well as to separately track limits and metadata from data files; and three is security, so that your organization or company's sensitive data is secured and accessible only to the users with appropriate clearance. Right, so let's get into this demo fast. Here we have the front page, the home page of the Lyceum, where you log in.
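The .mat-to-spreadsheet transcription Joshua mentions could look roughly like the following with SciPy. The variable names and the assumption of equal-length flat vectors are illustrative only, not how the Lyceum actually does it:

```python
import numpy as np
import pandas as pd
from scipy.io import loadmat, savemat

def mat_to_frame(mat_path: str) -> pd.DataFrame:
    """Sketch of a .mat-to-spreadsheet step: read each non-metadata
    variable as one column. Assumes equal-length numeric vectors."""
    contents = loadmat(mat_path)
    columns = {name: np.ravel(value)
               for name, value in contents.items()
               if not name.startswith("__")}  # skip __header__, __version__, ...
    return pd.DataFrame(columns)

# Round-trip demo with a synthetic measurement export
savemat("demo.mat", {"freq_hz": [100.0, 1000.0, 10000.0],
                     "thd_pct": [0.8, 0.3, 1.1]})
df = mat_to_frame("demo.mat")
df.to_csv("demo_clean.csv", index=False)  # column-based, single header row
```

Real .mat files often carry structs and cell arrays, so a production converter needs more care than this flat-vector sketch.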
You'll see what groups you're a part of, as well as your data sets that you can search through. Right now we're going to talk about data ingestion. In order to ingest data, you come into the data ingester tab and choose your file from your computer to be ingested. You give it a name; we're going to name this "lyceum demo to THD podcast". Then we give it some descriptors so we can easily find it in the future: we'll give it the program name "demo", for example; we'll give the stage "DVT" or "dvt100" or whatever it is that you want; you can create new ones if you want. We're just going to put the environment "manufacturing line", and then if you want to, you can add a tag so you can easily find it in the future, if you want to leave a little note for yourself. So I could just put "Dave" there, and if I search "Dave" in the future it'll find this data set. All right, so this is where you choose the group privileges: anyone in these groups will have access to this data, and no one else. And here's where you come in and clean your data. You have a workbook here with five different sheets. The first sheet is in this golden standard format that we talked about before: we have a common axis column in the first column, and then you have your devices-under-test performance in the following five, so this is good to go; this doesn't need any cleaning. On the next page, we have basically all the different serial numbers in rows instead of columns, and we have these upper and lower limits as part of the data set. In order to clean this data, we have to make sure that these are all in columns and not rows, so we basically have a button here that transforms everything: by hitting the transpose button, now everything is in that golden standard format. On the next page we have an output of data where you have repeating columns of x-axis data. You can see right here in these highlighted columns that all this data is repetitive and not necessary, so I'm just going to highlight those and then delete them. The first three columns, we don't really need that data either, so I'm just going to delete those as well, and now it's in the golden standard format. Sometimes you'll have data that has both one-dimensional data, what AP calls meter data, and chart data, or two-dimensional data, and you want to separate these onto separate pages. So for instance, in this scenario what we'll do is take this data set and just move it to the page where we have one-dimensional data; that took everything and put it onto the page which already has the same one-dimensional data. Then we come in here and do what we did on the other repeating x-axis column page: I'll just remove this data, delete it, and then remove this as well. Cool. And then finally we've got the one-dimensional meter data, where we have this column that doesn't mean anything to us; we don't really care about this one for all intents and purposes, or this one, or these two, which are just titles, this is repetitive data, so remove that. And we have this header row, which is not the first row in the spreadsheet, it's the second row, so what we do is just switch it with the top header row, and that gives everything its appropriate titles; then just remove that. The last step is that we have all these empty cells here that we've got to get rid of, so we're just going to select that, scroll all the way down, shift-and-click to select, and then delete all that, and now we've got everything clean. We can also rename columns if we want; that's just an example, if that's what somebody wanted to do.
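The transpose step in that demo has a direct analogue in code. This is a small sketch assuming a hypothetical row-per-device export like the one described, with made-up device names:

```python
import pandas as pd

# Row-oriented export (hypothetical): one row per device under test,
# with the frequency points spread across the columns
rows = pd.DataFrame(
    [[85.2, 88.1, 79.5],
     [84.9, 88.3, 79.1]],
    index=["DUT_01", "DUT_02"],
    columns=[100, 1000, 10000],
)

# Transpose into the column-based golden-standard layout:
# a single x-axis column followed by one column per device
cols = rows.T.rename_axis("Frequency_Hz").reset_index()
print(cols.columns.tolist())  # ['Frequency_Hz', 'DUT_01', 'DUT_02']
```

After the transpose, the repeated-column and stray-header cleanups described earlier apply unchanged.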
"Sure." Cool. All right, so now we've got everything cleaned up, and everyone can kind of see the benefit of a data file in this format. In order to post-process and utilize the data effectively on the Lyceum, we have to actually configure it for the Lyceum. What we do is come over to the configure tab, and here's where we add either measurements or limits. For this first page I'm going to add a measurement, because this is a measurement and not a limit. It defaults to two-dimensional data, because with audio data you typically have X-Y data, you know, frequency response data. Here's where you select what your primary column is, hit done, and select what your units are: hertz for the x-axis and dB for the y-axis, and we're done configuring this page. Okay, so we go to the next page, and we can do the same thing, except we have these two columns that are upper and lower limits. Here's where I'm going to add my measurement: I select the first column, and then instead of having the rest of the page be considered a measurement, I'm going to deselect the rest and only select the columns that I want to be part of the measurement. That basically removes the upper and lower limits from the actual measurement data set, and then we just select our units here too. Then I can come in and select those limits: basically I select the primary column again and then the upper and lower limit, give it a name, "THD pod limit", and then its units. Great, so this page is all good and done and configured. Here I'm just going to configure just like I did on the first page, okay, that's good to go, and the same thing goes for this page. And then for one-dimensional data, I just select one-dimensional data, select my primary column and my unit, and we're good to go.
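Once measurements and limits are configured like this, the pass/fail comparison against upper and lower limit curves amounts to something like the following NumPy sketch. The values are illustrative, and the linear interpolation is an assumption; real limit systems may interpolate on a log-frequency axis:

```python
import numpy as np

def check_limits(freq, measured, limit_freq, lower, upper):
    """Pass/fail sketch: interpolate the limit curves onto the
    measurement's frequency points and flag any excursion."""
    lo = np.interp(freq, limit_freq, lower)
    hi = np.interp(freq, limit_freq, upper)
    failures = (measured < lo) | (measured > hi)
    return not failures.any(), failures

freq     = np.array([100.0, 1000.0, 10000.0])
measured = np.array([85.0, 88.0, 79.0])   # e.g. SPL in dB
passed, fails = check_limits(freq, measured,
                             limit_freq=np.array([100.0, 10000.0]),
                             lower=np.array([80.0, 75.0]),
                             upper=np.array([90.0, 85.0]))
```

Per-unit results like `passed` are exactly what feeds yield statistics further up the pyramid.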
Once everything's configured, you can hit upload. Just a curious question, because we deal with this in China all the time: have you looked ahead at avoiding any kind of Google blockages with the Great Firewall? Is this all based on a hosting service that everybody can share across the border into China? Yeah, so this is browser-based. We have someone that's used it in China, and it hasn't had an issue so far; we don't use Google in our deployment or back-end architecture. Okay, good plan. Simon, about that issue, maybe we grab a beer after this and talk more stories. I would love that. So basically, you see here this filters to the most recent data file that was uploaded, but if it wasn't, and you wanted to find it again, let's just say I type in "Dave". It pops up, and oh, I have other ones, I was practicing with Dave earlier, so it populates everything that has a descriptor named "Dave", or I could do "manufacturing line", right? Cool. So the real benefit is you can come in and see the data very quickly, but you can also do other things. If you expand this, you can download the files in the clean format, which is the format after we cleaned everything into columns, or you can download the original file. This would be beneficial if, say, you have a complex Excel sheet with a bunch of formulas in it that you don't want to screw up, you don't want to lose that information. You can basically host your file on the Lyceum, clean it up, or not even clean it up, just upload it, attach descriptors, and download that file again whenever you want. It's also a good way of keeping a library of your data. So we really give engineers as much flexibility as they want, but
if they want to configure everything so they can post-process in the Lyceum, they can. I don't know if you've used SoundCheck before, but it's somewhat like the memory list: you come in, select your data, and it populates into what we call the pin board over here. Say I have this measurement, and I also want to pin the limit we created into the pin board. If I hit post-process, I can add an actual graph with this data. Oh, I have to change the scale here, one extra zero too many. Then if I want to add the limits, I can just load them, and you can see them right there. Then you'll be able to calculate statistics and yield in the future, but we can get to that in a future podcast. One of the best time-saving functions of the Lyceum is when you want to do that for, say, the first fit-check run or the mini build, where you have a bunch of data at the beginning of a build and you want to apply the same cleaning steps to every subsequent day of the build. You can basically choose a file again, give it a new name, like "day two of production" for example, and use the template file. What the template file is: it remembers every single step that we took to attach descriptors, attach security privileges, and all the cleaning steps we just ran through, and it has it all in one file that you just reference. So I put in "Lyceum", and it remembers that project name as the template file, and all the steps I took, all the descriptors, it just remembers everything, and I can quickly hit upload so I don't have to go through that whole process again. And the next thing, which we're really excited about and are going to release in the next few weeks, is multiple file uploads at once. So if you have a ton of different files that you want to upload
them all at once, you can basically use a template file that will append every single data file into one file, so you can not only download it, store it, and search for it on the Lyceum, but also configure it, post-process it, and use the Lyceum's post-processing applications. So that's really exciting, we're really excited about that. Very good. So yeah, to round it out, right now we want to mention that the data cleaning and ingestion tool is free as of today. Some use cases for the data ingestion that might be beneficial: one, transforming a complex data set so you can run a Python script on it; two, organizing different data files into the same format; and then, in the future, searching and downloading measurements and limits in a clean format to run internal or custom applications on top of them. Okay, so up next we're just building on top of the data post-processing applications that we've already developed in the Lyceum, such as the graphing you saw and the basic statistics and yield that we already have. Going forward, we're looking for four beta customers to scale a data application specific to the needs of their program. We just wanted to throw this out there because it will come at a generous discount on the service and include our 24/7 team support. So yeah, we're excited where this is heading; we really see a bright future for this. With that, you know, we don't have to take questions if you don't have any, because I assume I described everything perfectly, but Simon, what do you think? How do you handle the different file formats from various vendors? Do people have to save the data as a text-based, ASCII format? Yeah, so there are various formats, like .dat and .mat, that I'm not sure of the technical term for the form they come in, but they can be parsed, we have a parser. We can parse it so it comes up like a spreadsheet in the data cleaner, and then you can go in, configure, and move things around, so it transforms the data into the format that our back end accepts. Okay, so it's just a question of reading various file type formats, binary included? We're limited to the number of formats we've been exposed to, but anything's possible, so if we have a customer with a specific format, let's say an old MLSSA file from the 1990s for example, we know we can parse that data and have it populate. All right, anything else, Simon? I think we're pretty good, we're going a little bit long. No, that's fine. So yeah, Josh and Chris, thanks for coming on and introducing this technology. I've seen this myself: more recently we had a microcontroller program for an LED effect on a speaker, and that engineer left the company. Not directly related, because it wasn't data collection, but the firmware was lost, so this issue of managing your data is a massive one, especially with people leaving and such, and it becomes a nightmare for whoever takes over. So, appreciate your time today, and anybody with questions, jot them down below, and of course like, subscribe, and share, and all that good stuff, and we'll see everybody on the next episode. Thank you so much, guys. Okay, thank you.
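For readers curious about the file-format question near the end, here is a rough sketch of the extension-based parser dispatch idea, assuming plain-text exports; binary vendor formats like .mat or old MLSSA files would need dedicated readers that are only stubbed out here. The function and file names are hypothetical, not the Lyceum's actual parser:

```python
import csv

def parse_ascii_export(text):
    """Parse a comma- or whitespace-delimited ASCII export into rows of strings."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    if "," in lines[0]:
        return list(csv.reader(lines))
    return [ln.split() for ln in lines]

def load_measurement(filename, raw_bytes):
    """Dispatch on file extension; each vendor format gets its own reader."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext in ("txt", "csv", "dat"):
        return parse_ascii_export(raw_bytes.decode("ascii", errors="replace"))
    # Binary formats (.mat, MLSSA, ...) would be added one parser at a time,
    # which matches the "limited to the formats we've been exposed to" point.
    raise NotImplementedError(f"no parser yet for .{ext}")
```

Once a file is reduced to rows of columns like this, it can be presented spreadsheet-style for cleaning and configuration, as described in the demo.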
2023-06-25 12:42