
PTC Product Development Strategy



There are quite a few Pro/E users in this forum, so I've ventured to bring this text up for discussion. Please don't scold me for not filtering the content (in the sense of "boiling off the water"). Everyone, depending on their interests, will be able to find information here for themselves. I also apologize for not translating it: no time. Anyone with the time and an interest in this subject is welcome to do so, please do translate it. I assume there are advanced specialists here (who read originals), as well as those who can run the text below through something like PROMT.

A user's perspective on the Orlando, Florida PTC/User show --- June 8, 2003

Email newsletter

by Peter Nurkse, Sun Microsystems

Contents

- PTC direction

- Product Development System

- Core Modeling

- Three Ages of the Pro/E GUI

----------------------------------------------------------------------

- PTC direction

PTC seems to be embarking on a big fundamental change. Customers are usually concerned when a key supplier, like PTC, changes course. But we should all know that in this day and age change is inevitable.

The big change isn't a product, or a new product. PTC has done that before, several times even. PTC began with one product, Pro/E, and then developed separate modules, and then different products. And then with the acquisition of CV in 1998 PTC acquired Windchill and the basis for a whole new family of data management products.

So, products, products, products, that could be the PTC story to date.

But now the company is planning to develop and deliver systems. Our familiar product names remain, of course, Pro/E and Intralink and the rest, but they will be developed and tested and delivered together as a system. And you can still buy and use any individual product or version you want, but PTC's focus has changed, from products to systems.

That is probably a more fundamental change than just adding products or mixing products, which is what PTC has largely done to date. So this could be the biggest change in the company's history, a change that will work itself out over years, this is just the beginning.

With any big plan like this, it's probably important to recognize the limitations of the plan. Design and data management will probably always be fundamentally different disciplines, appealing even to different people, different personalities. Design thrives on variation and spontaneity, while data management needs to lock things down.

Within PTC itself customers still see that difference. Ask the same question of someone from the Pro/E side of PTC and of someone from the Windchill side of PTC, and you can get two very different answers. And that may continue, because design people and data management people see things differently, even looking at the same thing.

The big plan, the new direction, to make and deliver systems, should over time solve any number of problems getting PTC products to work, and to work together. And it's a major competitive advantage for PTC, because nobody else seems to have such a comprehensive solution without using different packages which were never even designed to work together.

But design and data management are different enough that they will probably always remain creative opposites, inspiring and driving the system development. One system that combines both design and data mgmt. will have to recognize and respect the differences between them, to be effective. It'll be interesting times for PTC, balancing design and data mgmt. within a single system, and not imposing a data mgmt. solution on design or a design solution on data management.

----------------------------------------------------------------------

- Product Development System

In the published program this talk by Jim Heppelmann, PTC Chief Product Officer, was originally titled "Five Reasons NOT to Adopt Pro/E Wildfire!". Could be the title was just chosen to ensure a packed room, because Jim immediately abandoned it, saying that the subject had already been "beaten to death" on the PTC/USER email lists.

Dick Harrison couldn't make it to the meeting, feeling a bit ill at the airport in Boston before getting on the plane this morning, so Jim delivered that presentation in the afternoon too. That was more on the overall business, revenue and profits, but parts of it are included here.

Probably some people were disappointed not to delve into Wildfire details. But Jim did have bigger fish to fry, namely, Product Development Systems. And PTC is pretty confident about Wildfire's overall direction (see the next story, on Core Modeling).

PTC representatives are often quite candid about quality. And Jim said that he thought Wildfire had the best quality of any Pro/E release in the last 5 years. Which would be about when PTC got Windchill, and Dick Harrison has told us before at these conferences how Windchill did distract PTC for a couple of years from Pro/E and Pro/E quality.

Jim did also mention, what probably most people know, that Wildfire had more user involvement before release than any other version of Pro/E. However, that did set expectations: people looked to see their contributions included in the final product. Perhaps some way is needed so advance user testers can see what happened to their comments on the product. If things turned out differently than what you wanted, at least you'd probably like to know why.

Jim said that "PTC used to focus 100% on engineering", which was probably true, it was an engineering company.

But engineers themselves know that a company that's 100% focused on engineering isn't a good company to do business with. You want more skills, marketing to help interpret requirements, services for support, even a helpful sales rep. So PTC has pulled back from that close focus on software engineering within PTC to get more involved in more areas.

Taking that kind of overall view of PTC and competing products, PTC found that product development environments today are composed of tools and databases and "weak connections" between them. How things work together becomes the key question, not this or that feature here or there.

There's the system business that PTC is aiming for, "how things work together". There PTC sees itself as different from a traditional Systems Integrator, like IBM or EDS. PTC would be a System Provider, delivering a complete system, but not building every system out of bits and pieces, like a traditional Systems Integrator. That seems a new phrase, System Provider, but it's good for PTC, it emphasizes PTC's strengths, having a relatively simple set of products to cover design and data mgmt. Alternatives from other companies require 2 or 3 or 6 or 7 different packages to get coverage in just one area (7 packages for EDS data mgmt.).

In case you haven't already noticed, Jim said that up to now PTC products have been developed in parallel, by entirely independent product teams, each team concerned to get the very best results for their products, and not necessarily for the other products.

To develop systems which integrate different products better, PTC will now include a System Planning stage at the beginning of each product cycle, so that all the products get some coordinated overall direction and input. And also a System Testing stage at the end of the cycle, where the products are tested together. That's new, up to now PTC product testing has been limited to checking the interfaces with other products, for each product, and has not included running everything together.

Already we can probably look forward a bit, and expect that PTC won't keep the product teams so separate, but may mix and match them, so then we might have a data mgmt. guy and a CAD guy working together on some common feature. Many combinations possible in the future, lots of potential.

For testing, PTC is going to develop and use a set of Certified Applied Practices, to keep the testing fixed on real world needs. Examples are Bottom-up Design, Top-down Design, Search and Re-use, Release to Mfg. Each a scenario of steps and stages involving different tools, both design and data mgmt.

So what do these Product Development Systems look like, and when are they available? Here's a schedule, but Jim emphasized it all depends on John Vreeland, who will be in charge of the testing. If John says it isn't ready, it doesn't ship on the scheduled date. There's an interview with John in the Fall 2002 issue of Profiles magazine, available on-line at http://www.profilesmagazine.com/p21/interview.html. Here's the initial schedule:

PDS 1.0   6/2003    Pro/E 2001 (2003010)   Intralink 3.2      PDMLink 6.2.6 DSU4
PDS 2.0   7/2003    (Pro/E same)           Intralink 3.3      PDMLink 6.2.6 DSU5
PDS 3.0   8/2003    Wildfire               (Intralink same)   (PDMLink same)

Nothing major so far, in terms of different products. But these first 3 system releases are priming the pump, getting ready for more. And the major point really is the mention of a system being delivered: that is major, up to now PTC has only delivered products.

You can see in these first 3 system releases a general PDS rule, and probably a good rule with any large system: change only one major component at a time, only one. That's a very different approach from the traditional product approach, which emphasizes releasing each product as often as possible.

The next 3 releases are:

PDS 4.0   11/2003   (Wildfire same)        (Intralink same)   Windchill 7.0
PDS 5.0   4/2004    Pro/E next release     (Intralink same)   (Windchill same)
PDS 6.0   11/2004   (Pro/E same)           (Intralink same)   Windchill next release

So one big advantage here for customers is that you shouldn't have to face new versions of everything descending on you all at the same time, just because every product team wanted to get out a new version at the same time. Now releases become system releases, and each product takes its turn, they wait for each other.

Jim emphasized products are still available a la carte. But he said you'd be best off to stay with the roadmap, or at least in sight of the roadmap. He gave due credit to us customers, saying that in the past a major difference for us was that we were doing all this system testing ourselves, running different products together. He also said in the afternoon that Intralink was going to be here "as far as I can see", and you can see that in the PDS major components: Intralink is a major component, equal with Pro/E and with Windchill in that if it changes that's a separate PDS release.

As a former Computervision employee himself, Jim may have been too modest to point out that PTC would not be able to deliver systems today if they had not bought up CV 6 years ago. Dick Harrison described at a past conference how PTC went about their due diligence investigation of CV in Dec. 1997, and found "a present wrapped under the tree", namely, Windchill. Which CV was running as a skunk works in Minneapolis (hence the name), and which PTC didn't know about when they made the initial offer for CV.

Windchill became PTC's entrance into enterprise data management, and combined with CAD that is PTC's standing to be a system vendor now. Quite a Christmas present. PTC already releases CADDS5 in conjunction with Optegra, the CV data mgmt. product which goes back to 1983. Plus CV gave an example of a CAD company which became a data mgmt. company, and many CV people are within PTC today with that experience and background. Seems Computervision has made a bigger contribution to PTC than any other company, and that contribution continues as PTC develops the new role of a System Provider.

----------------------------------------------------------------------

- Core Modeling

Netesh Gohil covered this subject for PTC. On the topic of the day, the Wildfire GUI, he was confident in the overall interface, although he said there was still work to do (see my essay on the Three Ages of the Pro/E GUI, following). He described the old menu manager approach as "hierarchical", and the new approach as not hierarchical. For example, with the old menus you often had to drill down a long way before you could start to do something. Now you can usually start right away (like, a hole), and see something, then modify it as you like. That's a key improvement to avoid huge dialog boxes filling up half the screen: you make choices as you need to, you don't have to face all the choices at once.

Netesh was positive all functionality is available, using the right mouse button and hot objects, or else collectors like Extrude Surfs and the dashboard. The goal for the dashboard is to show you the choices most people need for 80% of their work, and to give you the alternatives for the other 20%.

PTC found out that of spin/zoom/pan, spin was the most common function. So that became the middle mouse button by itself. Then zoom and pan are secondary. Netesh described the usual zoom in/out/in/out sequence. Which to any old Computervision user doesn't sound as efficient as the CV Drawing Window. That secondary smaller window let you zoom instantly throughout the visible model, in one area and then another, without ever doing a zoom out.

Netesh has a list of the 76 features in Pro/E that were developed over the years, with many different selection modes, surfaces and boundaries and so on. Now there's one selection method, and it doesn't depend on what mode you're in.

The dashboard can pause (becomes faint), which is where you can create construction geometry as needed.

Netesh thinks that making datums is largely a workaround for the pattern tool not handling rotations, that's why rotating and patterning is awkward. He wants to emphasize fixing the pattern tool problem, so that you just pick what you want to rotate, select an axis, and you're ready to go.

The Insert menu always gives you a choice between object>action, or action>object. But when you edit an object, it's object>action only, because actions may depend on which object you select. For example, there are different copies for curves and for surfaces and for features (and the feature copy isn't in Wildfire yet, still the menu mgr.).

Patterns can be defined within any boundary you define on a plane, like perf hole patterns on a sheetmetal part. You can remove any individual hole instance in the pattern by digitizing the hole (it turns white), which is easier than editing holes in a table. The pattern can always be projected to a surface, but defining the boundary itself on a variable surface is a future enhancement.
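The geometry behind "pattern within a boundary" is easy to picture with a toy sketch: generate a grid of candidate hole centers and keep only the ones that fall inside a polygon, here with the classic ray-casting test. This is just an illustration of the idea, in Python, not PTC's implementation, and the boundary and pitch numbers are invented.

    def inside(poly, x, y):
        """Ray-casting point-in-polygon test."""
        hit = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):                       # edge spans this y
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    hit = not hit
        return hit

    boundary = [(0, 0), (100, 0), (100, 60), (50, 90), (0, 60)]  # any planar outline
    pitch = 10
    holes = [(x, y) for x in range(0, 101, pitch)
                    for y in range(0, 91, pitch)
                    if inside(boundary, x, y)]
    print(len(holes), "hole centers inside the boundary")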

View Manager handles simplified reps and explode states and display states and orientations in the one tool. And All States lets you create any combination of them. The goal here is "drawings as models", show any desired state in the model, without requiring a drawing. Model sections are due to be moved into the View Manager next.

Netesh gave an example of Wildfire efficiency in creating a feature: in the past, if you got an error (feature didn't intersect part), you'd have to cancel, then often create a surface (you can usually do that), then look for intersections, then create the solid. With Wildfire, you can stop and check and convert between surfaces and solid, all within one feature command.

Rounds are just rounds: not simple or advanced, not a new set or a round set, but just rounds. You just start, and go to where you want (again avoiding a huge dialog box). There is a new round guaranteed to be machinable with a ball end cutter: before, Pro/E took some liberties using patches to define blends between rounds, which couldn't be machined with a ball end cutter.

Yellow is fixed now as the focus color, whatever you're looking at right now. Other entities visible will be shown fainter, always assuming a blue background.

About sketching first, before creating geometry: Netesh said that Pro/E requires everything (like a sketch) to be a dependent copy, or an independent copy. If Pro/E uses a dependent copy of that initial sketch, then it doesn't carry dimensions with it, just the outline. But if Pro/E uses an independent copy of the sketch, then there isn't a link. There is a solution to allow linking to the same section as needed at different points, but that wasn't ready for the current release.

----------------------------------------------------------------------

- Three Ages of the Pro/E GUI

The confidence PTC people show about the Wildfire GUI has a solid foundation, a User Model. You don't usually hear of User Models when you buy a product, but any user interface (not just software) depends on a User Model, even if it's just in the mind of the designer. And PTC put man-years of work into their User Model, describing generally how people interact with a CAD system to create and edit 3D geometry (a new area, not well defined). Someday nobody will buy any product that uses software (like, a cellphone) without asking to see the User Model, but that is a ways off.

Seems there are 3 ages in the history of the Pro/E GUI:

- The Golden Age: from Rev. 1 to Rev. 19, the GUI was driven simply by the underlying code, the programmers wrote the GUI. Every time you hit a Done in the menu manager, it usually matched exiting a subroutine in the code itself. That was an amazing time, that for so long a period users and programmers could be in such harmony, and users could happily trace the structure of the code in their menu choices.

- The Lipstick Age: this name comes from Jim Heppelmann, and he used "lipstick" to describe the GUI development starting with R20. The whole emphasis there was to "slap" a Windows-like interface on top of the old menu manager, even though nobody ever suggested Microsoft knew anything about 3D CAD. The result wasn't a positive benefit, it was just cosmetic, just lipstick, Jim said.

Both of these first two GUI ages at PTC, the Golden age and the Lipstick age, did not have a User Model. But the sheer problems of trying to develop a real graphical user interface for Pro/E forced the development of a User Model, and that resulted in:

- The Wildfire Age (no other name needed). And that's where we are now. This may be more centered on the graphical display itself than any other user interface around. And it has that User Model to guide it, which means it probably will look strange to users initially, just because users anywhere have rarely had an interface designed really for them (rather than an interface shared with the programmers, for example, or an interface that is nothing more than Windows compliant, for another example). A much bigger change than the cosmetic changes from R20 on.

Probably everyone involved with Pro/E has to be nostalgic sometimes for that Golden Age. It was so simple, how programmers would write code and then users would follow that code along in the user interface. Perhaps in the history of software applications there has never been such a fine match between programmers and users, sharing the user interface, as on R1 thru R19. But although Windows might have had some share in ending that Golden Age, with the R20+ changes, the real change probably is the development of a User Model to describe working with 3D geometry. Something new there, and the results have to be different.

-----------------------------------------------------------------------------------

A user's perspective on the Orlando, Florida PTC/User show --- June 9, 2003

Email newsletter

by Peter Nurkse, Sun Microsystems

brought to your desktop

by PTC/USER and PTC and Sun Microsystems

This newsletter contains Asa Trainer's presentation on data exchange, which happened yesterday. Unfortunately the writing and editorial staff have to sleep sometime.

And I'd like to also correct a detail of fact. Leafing through the presenter biographies today, I noticed that Jim Heppelmann wasn't actually a Computervision employee. He was an employee of Windchill Technologies (which he helped co-found, in 1996), and CV had an interest in that company. When PTC acquired CV in 1998, PTC got that interest too, and the rest is history. Worth clearing up this detail, because it explains why PTC didn't know of Windchill when PTC made an offer for CV in Dec. 1997. Jim and the others at Windchill Technologies weren't on the CV payroll, so they weren't very visible.

----------------------------------------------------------------------

Contents

- Data Exchange and Archiving

- Customize for Business Success

- Concurrent Design at Motorola

- Top-Down Design: Evolving Process

- Great Top-Down Designers of the Past

----------------------------------------------------------------------

- Data Exchange and Archiving

Asa Trainer remarked this was the sixth consecutive year he has presented this same subject for PTC. And he gave his usual thorough and systematic talk, covering a wide variety of interfaces and methods. He had a good slide to show the importance of data exchange: a transmission with parts from 6 different sources, in 6 different formats (CATIA, UG, IGES, STEP, and so forth, a common hodgepodge).

At one point years ago PTC sales reps. often said the solution for data exchange was for everyone to use Pro/E. PTC has come a long way since then, and in the Nov. 2001 STEP benchmarks (organized by ProSTEP, the STEP standards group) Pro/E scored highest on both import and export, in a field of 8 other packages. Wildfire 2.0 will support STEP AP214, which is popular in Europe. Wildfire 3.0 will include GD&T support in STEP, based on Wildfire 2.0 supporting annotated features.

2D import/export now uses wizards, to give an alternative to specifying all those options in your config.pro. The import data doctor for 3D is continually being developed, so you can work on subsets of the data (usually a better approach than trying to do everything at once), or freeze certain surfaces you don't want to change in later imports, or split surfaces to solve u/v parameter problems.

Parasolid import and export (hidden in Wildfire) is improved.

The CAT II interface (CATIA V4) lets you set the CATIA model size and accuracy when you export, in order to avoid problems in CATIA with Pro/E's use of relative accuracy. Believe it or not, the default for model size in CATIA is 10 meters (they must have been dreaming of airplanes), but most CATIA users set it to more like 1 or 2 meters. Wildfire 2.0 will include a new translator (a separate package, see your sales rep.) for CATIA V5, which doesn't even need a CATIA license. The geometry is better quality, but CATIA gets most of the credit, just because it has a better geometry engine on V5.

IGES still gets attention, you can export cross-hatching as an entity. And also import DXF blocks as drawing symbols, that's often the most appropriate equivalent in Pro/E to the use of blocks in AutoCAD.

Facetted models are supported now for design and context, including mass props, clearance, cross sections, and datums. STL, VRML, STEP facetted, CATIA SOLM. With DXF and ProductView for Wildfire 2.0.

For UG, there is a Granite gateway, reading up to V18, and writing V18, which uses the ATB to update changes. IDEAS is import only, reading up to V9, and there are no plans to export to IDEAS, since EDS isn't going to keep IDEAS. UG NX is in-house now at PTC, and may be supported on Wildfire 3.0.

Writing out to previous versions of Pro/E (Cross Release Interoperability, or CRI for short) works via an ATB neutral file. But a separate file is hard to track, and generates a single import feature. Wildfire 2.0 will include identifying features in the feature tree, or from the feature tree to the model. ATB will include updates. Later improvements will include Views and Explode States. On Wildfire 3.0 we may get a different approach, using Granite. Granite has an "understanding" of the model, but it doesn't have the "recipe" to modify it.

AutobuildZ is an entirely free tool, available at the ptc.com download page, to generate 3D geometry from typical ortho/section/detail views. Extrude/Revolve/Hole/Datum are supported, and it validates the profile automatically. It will be built into Wildfire 2.0.

The name Pro/Batch is gone, replaced by Distributed Services. But it does the same job, all the data interfaces and ModelCHECK and printing/plotting.

IGES was originally developed by several companies together (like, Boeing and GE) as a way to archive CAD data securely, in a public neutral format, so it could be read anytime, regardless of versions and vendors and hardware. Then it came to be used for data exchange. So now, in reverse, archiving is looking to STEP, which was originally developed for data exchange but is now useful for archiving---for the same reasons as IGES was useful, the public neutral format.
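One reason a public neutral format works for archiving is that it stays readable with trivial tools. IGES, for instance, is fixed-format text: 80-character records with a section letter (S, G, D, P, T) in column 73. The Python sketch below takes a quick inventory of an archived file; the file name archive.igs is just an assumption.

    from collections import Counter

    SECTIONS = {"S": "Start", "G": "Global", "D": "Directory",
                "P": "Parameter", "T": "Terminate"}

    counts = Counter()
    with open("archive.igs") as f:
        for line in f:
            if len(line) >= 73:
                counts[line[72]] += 1    # column 73 holds the section letter

    for code in "SGDPT":
        if code in counts:
            print("%-9s %6d records" % (SECTIONS[code], counts[code]))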

Archiving would include here industries like steam generator turbines, where a mfr. might need exact information 20 or 30 years from now, when Pro/E may be just a memory. Of course, a fully dimensioned drawing is still the very best archive for mech. design (preferably a physical copy on microfilm, still the best archive medium). But you may not be making those drawings any more, or you may want to archive the 3D data in addition to drawings.

----------------------------------------------------------------------

- Customize for Business Success

Paul Crane is in a central engineering position at John Deere in Moline, and he does technology assessments of PTC software across the company. He sees a wide variety of business groups, since making a tractor is not at all like making a combine. And he's looking for opportunities to bridge the gap between what a tool like Pro/E can do, and what a common process requires, with a custom program. But it's important that a custom program be well justified and well used.

The most important general observation Paul had to make was probably this:

automating an inefficient process is pointless

That's a common observation, but it still happens, over and over again.

Paul had 3 examples of custom programs:

- updating Pro/E files to Deere company standards. One part of the problem of maintaining company standards is that it is tedious work, no one wants to do it really. So a program fits. The Deere program designates parameters as needed, moves items to layers, orients views, checks relations, renames and reorders datums. It got its biggest single use when a group was moving to Intralink, but so far 607 users have saved 110000 hours with this program. Doesn't do so much, but is used a lot.

- a gear program to model internal and external helical gears in Pro/E. This isn't a program for designing gears, Deere has other tools to do that. But it creates the corresponding models for Pro/E assemblies. This program is used less often, but does save more time, at least 1/2 hr. or more per use (a sketch of the involute math behind such a tool follows below).

- JDNest, a sheetmetal nesting program. It takes not just Pro/E outlines, but DXF and IGES too. You can copy the results between sites, and it can run static mode (same parts every time) or dynamic (real time mfg., any combination of parts on one sheet). There is a saving here of NC programming time, but also a real mfg. savings of material with efficient nesting of parts. This program gets a lot of use, and a lot of savings (a toy nesting pass is also sketched below).
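About that gear program: the heart of any such tool is the involute-of-a-circle math that defines a gear flank. Here's a sketch of that calculation in Python, just to show the kind of geometry a program like Deere's has to feed into Pro/E. It's my own sketch, not Deere's code, and the module, tooth count, and parameter range are illustrative only.

    import math

    def involute_points(base_radius, t_max, n=20):
        """Points along an involute unwound from a circle of base_radius."""
        pts = []
        for i in range(n):
            t = t_max * i / (n - 1)
            x = base_radius * (math.cos(t) + t * math.sin(t))
            y = base_radius * (math.sin(t) - t * math.cos(t))
            pts.append((x, y))
        return pts

    # Illustrative numbers: a 20-tooth gear, module 2 mm, 20 deg pressure angle.
    m, z, alpha = 2.0, 20, math.radians(20)
    r_base = (m * z / 2) * math.cos(alpha)   # base circle radius
    for x, y in involute_points(r_base, t_max=0.6, n=5):
        print("%8.3f %8.3f" % (x, y))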
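And for a feel of what a nesting program decides, here's a toy "static mode" pass: place rectangular part bounding boxes on one sheet, row by row, tallest first. Real nesting like JDNest works on true part outlines and does far better; this greedy sketch, with invented numbers, only shows the kind of placement decision involved.

    SHEET_W, SHEET_H = 1200.0, 600.0   # sheet size in mm, illustrative

    def shelf_nest(parts):
        """parts: list of (name, w, h); returns placements (name, x, y)."""
        placements, x, y, shelf_h = [], 0.0, 0.0, 0.0
        for name, w, h in sorted(parts, key=lambda p: -p[2]):  # tallest first
            if x + w > SHEET_W:          # row full: start a new shelf
                x, y = 0.0, y + shelf_h
                shelf_h = 0.0
            if y + h > SHEET_H:          # sheet full: this part doesn't fit
                print(name, "does not fit on this sheet")
                continue
            placements.append((name, x, y))
            x += w
            shelf_h = max(shelf_h, h)
        return placements

    print(shelf_nest([("panel", 500, 400), ("bracket", 300, 200), ("tab", 100, 80)]))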

You can see Deere keeps tabs on these programs after they are released: how often they're used, and who uses them, and also the cost savings. There's a tip for any custom program, and it's not hard given a company network: collect information on who uses the program and how often. That can help justify the program itself, and also other programs afterwards. If you don't even know how often a program is used, you can lose touch with the users. Paul showed charts of the data, showing highs and lows in the use of different programs over time, and also the number of daily users (since one person alone could run it more than others). These charts gave a good deal of insight into how the programs are used.
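Collecting that usage information can be as simple as having each custom program append a record to a shared file when it runs. Here's a minimal sketch, assuming a shared network path; the path and tool name are hypothetical, not Deere's actual setup.

    import csv
    import datetime
    import getpass
    import os

    LOG_PATH = r"\\server\share\custom_tools\usage_log.csv"   # hypothetical location

    def record_usage(tool_name):
        """Append one record: who ran which tool, and when."""
        new_file = not os.path.exists(LOG_PATH)
        with open(LOG_PATH, "a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["timestamp", "user", "tool"])
            writer.writerow([datetime.datetime.now().isoformat(),
                             getpass.getuser(), tool_name])

    record_usage("standards_update")

From a log like that you can chart usage over time and count daily users, which are just the charts Paul showed.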

PTC is in somewhat of the same situation: PTC sells a bunch of software to a company, but any vendor can have a hard time figuring out if people are using their software, and how much, and in what areas. Any software vendor could provide better and more timely support if they knew more about how their product was used, and collecting that information is also possible over the Internet. Could be at some point customers have a choice whether to send that information to PTC: some might want to do it, others not.

Paul pointed out costs of custom programs, like training, Pro/Toolkit license if needed, development time. And also maintenance: it's been said that 80% of the cost of creating a software program is in the maintenance, after initial release. And that's true just as much of a relatively modest home grown custom program.

Just to be complete, Paul also listed risks of custom programs: those maintenance costs may increase, and vendors can supply the functionality at any time. You absolutely want to avoid creating a new process by adding a program: that's not the point, the point is to aid an existing process, by bridging a gap between a tool and the process. If you're creating a new process with your new custom program, you're probably creating problems instead of solving them.

----------------------------------------------------------------------

- Concurrent Design at Motorola

Motorola made a major contribution to the user group by presenting the results of years of work to get industrial design and engineering to share the same Pro/E assembly successfully. The two presenters were Tim Sutherland representing ID, and Scott Bots representing Engineering. It was very useful just to see the two personalities interacting in their typical ways, ID always wanting to change anything on the exterior and Engineering struggling to keep some features fixed and stable on the inside.

The example was cellphones. You might think that the external (customer visible) features of a cellphone would become stable early. But no, there are many variations on any basic cellphone model, often just the exterior appearance. Just one customer may ask for a selection from several different designs, all varying just by the exterior. And a cellphone isn't trivial: there can be 1000 dimensions down one side (mirrored), and up to 10 engineers working on different areas of the interior, like the board and keys and switches and a display and so on.

Four years ago they were using the Master Model method, but with poor geometry quality (occasional visible blips), unsymmetrical surfaces (they did mirror, but after the mirror later changes might appear), and a design which wasn't very flexible. Which didn't match the need to produce many variations on a design. And they even had strange imports from outside the Pro/E world, like Alias geometry. Back then they used no splines, just line segments (perhaps because of imported curves). And the Master Models weren't robust enough to support detailing: usually they'd develop the MM until it wouldn't shell any more, then have to add detail in target parts. The master part had about 500 features to it.

Now they have a process. A major change is they use splines, all native Pro/E splines. Tim emphasized making splines as simple as possible, minimum number of control points. People often think more control points must be better, but that's not so, because you start getting kinks and bends and the geometry starts getting complex fast and then fails easily. If you start simple, and get the end tangency the way you want, you may not need much more to finish the spline.

A typical phone design begins with 2 parting quilts in space, representing the upper and lower parting lines (usually there's a vertical band all the way around the outside, between the parting lines). Those 2 quilts are in the first 5 or so features in the part, fundamental. From those two surface quilts, the top quilt is created, the top surface. The top quilt gets developed until it doesn't shell anymore (always a limit there), and then you go back a step and offset the top quilt to get an inner quilt. The master model is always all surfaces.

Keypad and mike and speaker holes are part of the top quilt, but penetrate down through the inner quilt. That way they should still intersect after any changes to the inner quilt. Tim recommended using the sketcher approximate splines "judiciously", typically when you're combining a spline with an existing surface. At this point half of the phone is designed, and here it gets mirrored to create the other half. By the way, those keypad holes on a cellphone are called "chimneys", that's the technical term.

On the inside of the phone, the core side, where those engineers are working, they make a comfortable and self-explanatory environment for the engineers by creating an "Engineering Home" coordinate system (that's the actual name). The engineers use that, and not the default coord sys which is sitting down in a corner somewhere. ID doesn't use Engineering Home at all, it's just for the engineers on the inside. Something they can trust.

To convert the quilts into a solid, a solid block is placed around the master model, but not extending past the parting quilts. The inner quilt cuts the block, and then the material inside the inner quilt is removed, making the cavity. The inner surface is basically simple, unlike the outside. If wall thickness changes, they just offset from the inner quilt. After removing the inner material, then they remove the outer material from that solid block, and they have the thinwall part itself. And in general the outer skin can fluctuate without affecting the bosses and ribs and other internal features.

Engineers now work in insert mode, always seeking stable geometry, back in the model tree between the inner quilt and the outer quilt (outer quilt was created later, after the inner quilt). As usual, drafts and rounds are created as late as possible, with intent surfaces for drafts and intent chains for rounds, to tie those features to underlying geometry, and not just to some edge. A round on 4 edges will fail if they become circular (variations on a boss, for example), but not if it's an intent chain round. Features like ribs and bosses on the inside are "overbuilt", extend out to the outside of the solid block, so they can't fail due to a change in the internal surface, when ID modifies the exterior.

Now the engineers exit the insert mode, and create interior geometry that has to depend on outside surfaces, the surfaces that ID plays with constantly. This obviously is a risky step, and geometry created by the engineers here may not survive a redefine, engineers could lose 5% to 20% of their work at this step because of an exterior change. But usually for an engineer to fix the problem takes about an hour of work in resolve mode, while before it used to mean starting over from scratch.

So a major feature of the process seems to be that risk is accepted, and the area at risk well defined and known to everyone. For the engineers who want to avoid risk, their protection is to work as early in the design as possible, in insert mode, inside the inner quilt. While ID will work at the end of the model tree, manipulating the final outside surfaces.

Tim said they use ISDX sparingly, because they need to manage dependencies (like the board inside). They did use ISDX once for a large lens, to get precise continuity, because Pro/E "doesn't like surface continuity". A tip for visual adjustment of a spline is to IGES it out and back in, and then use that IGES spline as a guide for changing the original spline.

In answer to a question, Scott said they use a skeleton as needed to provide shared references in the interior of the phone (like two matching bosses in the top and bottom halves).

----------------------------------------------------------------------

- Top-Down Design: Evolving Process

Brian Adkins from John Deere gave what could be called a pretty sophisticated presentation on Top-Down Design (TDD). Sophisticated because he was emphasizing the overall process, breaking a larger task into small pieces (and then assembling them back again, not forgetting that vital step). This kind of process view usually seems to happen only after some time, with any new technology.

So Brian mentioned various tools used with TDD, skeletons and layouts and simplified reps. and whatnot. But he just mentioned them in passing, as tools you can use; his interest was the process. There's no tool that defines TDD, not even a skeleton.

Brian was interested enough in the Top-Down Design process to find out the origins of the name. And it turns out it isn't a PTC name, or even a mech. design name. The phrase was originally used by Niklaus Wirth, the computer scientist who invented the Pascal programming language, in a paper back in 1971. And he was just talking about software design. But he had the essential point, to break problems down until solutions become easy. Divide and Conquer is Brian's favorite way to describe that general approach. For Pro/E TDD, he suggested: "efficiently distribute design tasks among multiple users and prevent downstream problems".

Or, Brian proposed, instead of saying "Product First", try saying, "Structure First". That does suggest the kind of orientation that can make TDD succeed. Brian mentioned at Deere there are managers who want nothing to do with TDD. It would be interesting to know how many PTC customers have succeeded with TDD, and how many have failed. Could be the numbers are about equal, say.

You'd think that the first step towards success with TDD would be to send people to class. But Brian pointed out that the typical TDD class is very routine and regimented, and gives the students a script to follow, use these particular tools to get these particular results. In that kind of class, there isn't enough attention to planning and structure, which probably make most of the difference between success and failure.

So after people return from TDD class, Deere tries to salvage a chance of success with TDD by introducing them to TDD planning sessions. In a planning session, there is a screen showing Pro/E. But then there's another screen, side by side, serving like a whiteboard for diagramming TDD structures. Brian uses Visio as the appropriate tool for the virtual whiteboard, because it has many symbols and ways to describe relationships. So it's easy to sketch relationships between components, find out what kind of TDD structure looks good for a particular project (and that can vary, from one project to the next).

There are weak points and failure modes to consider, also the people on the project and their experience, also downstream uses of the TDD data. What might work fine for one project might be a real failure for the next, depending on these kinds of issues. Is the product going to be actually configured in Pro/E, or in Windchill, or in MRP, or somewhere else (and then, what about simp. reps, Pro/Program creation of parts, family tables, manual drawing changes to BOMs, will they be in that final configuration). Motion analysis is another issue, motion and TDD are "like oil and water".

And then you might use map parts, or copy parts, or copy geoms, or you might not use any of them. You might use skeletons, or you might not use any skeletons (use external data sharing instead). If you do use skeletons, you could have a separate skeleton control assy, and use copy geom then to get the info over to your assy. There may be a trend here to reducing the use of skeletons within a TDD assy.

You probably want to think in advance, and document, how long your external ref. paths will be. What if one breaks? Will you even know it broke? What will it take to fix it? Is there a level in the assy above which no external refs are permitted?

Again thinking of sketches and diagrams, Brian suggested describing the information flow in a proposed TDD assy. For example, if the information flows from A to B to C to D, you don't want to see a reverse current flowing back from C to B. You can use the Global Ref. Viewer within Pro/E itself to look at those arrows, those flows.
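That kind of check is easy to prototype outside Pro/E too: given the intended order of information flow and the actual reference arrows, flagging reverse currents takes a few lines of Python. The component names and edges below are invented for illustration.

    # Planned direction of information flow, and the actual references.
    intended_order = ["A", "B", "C", "D"]
    edges = [("A", "B"), ("B", "C"), ("C", "D"), ("C", "B")]  # last is a reverse current

    rank = {name: i for i, name in enumerate(intended_order)}

    for src, dst in edges:
        if rank[dst] < rank[src]:
            print("reverse current: %s -> %s" % (src, dst))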

----------------------------------------------------------------------

- Great Top-Down Designers of the Past

As Brian was talking about the importance of planning and structure for Top-Down Design, I wondered about some of the great designers of the past. I'm thinking of the mechanical designers of 70 years ago, say (the 1930's), who created airliners and battleships and railroad engines and every other kind of large machine, with just board and paper and ink.

Seems to me that those great designers, and there were hundreds of thousands of them just in the US alone, had to have a very sophisticated and deep knowledge of Top-Down Design. As teams and departments and companies, they had to know how to break down the biggest projects into the smallest necessary pieces, defining interfaces (although, they probably didn't use that word) and dependencies and rules all over the place.

So why are we having problems with planning and structure of large designs now, 70 years later? Don't we know more now than they ever did?

Well, perhaps we don't know as much, in the planning and structure of large mechanical designs. Back then, if you had a few hundred designers, they couldn't begin to do any work until planning and structure were complete. There was nothing they could do, just sitting at the board, until that job was done first.

Now however any person or group can start designing on a computer without paying any particular attention to planning and structure of their project. Perhaps our general approach (and in other areas besides mech. design, too) is that because the computer makes changes easy, we don't really need to plan as much in advance. Even though generally we find there's a price to pay afterwards when the changes do come. Time to market drives a lot of us, and sitting around planning doesn't look like as much of a contribution to time to market goals as banging away on keyboard and mouse.

The advance planning and structure techniques that were routine and fundamental in large mechanical design 70 years ago probably now survive more in large software design and large electrical design (like microprocessors). Ironically, software and electrical design have produced mech. CAD, and mech. CAD has made it easier to start large mech. projects without thinking so much about planning and structure.

The story of Top-Down Design among Pro/E users, as Brian told it, seems to advance from concentrating on tools to concentrating on process. So if we work on process and planning and structure, some day we may share the same intuitive and deep understanding of Top-Down Design as those great Top-Down designers of the past.

-----------------------------------------------------------------------------------

A user's perspective on the Orlando, Florida PTC/User show --- June 10, 2003

Email newsletter

by Peter Nurkse, Sun Microsystems

brought to your desktop

by PTC/USER and PTC and Sun Microsystems

Again this year user presentations were about a third of the total, and the rest of the presentations were mostly by PTC or by other vendors. It is good to see PTC supporting the conference so thoroughly with people and other resources. But it might be worthwhile for the user community to pause and think for a moment about how to improve user participation.

I've mentioned that before. It's like a perpetual question, how to get more user presentations at user groups (all kinds of user groups, not at all just PTC/USER). And a good question, since it seems some of the most significant presentations come from users. Vendors can speak about tools, but users speak from the heart of the process, where the tools are applied and used, and problems and solutions emerge there that the vendors don't know and can't anticipate.

So how could users participate more? If you've never presented at a user group and are positive you never will, you're already a good resource on this issue, you represent the vast majority.

I'll make a suggestion: the user group could offer to provide editors for accepted presenters. An editor would be someone with writing and presentation skills and experience, but not necessarily any technical depth (like me, for example). The editor would work with the presenter over a 2 or 3 month period remotely to develop the presentation, and do basically the grunt work. The presenter would still add the real value.

Since public speaking is a major deterrent for most people, at the conference itself perhaps the user group could offer to provide speakers. The speaker would be someone who knows the presentation, and who does the main delivery, with the presenter beside them on the stage. The presenter would have to be on the stage, and perhaps occasionally correct or amplify the speaker, but the presenter wouldn't have to do the talking. We know public speaking is a real deterrent for most people (unless you're a vendor, in which case it comes with the job).

So there's one suggestion. If you have any suggestions for increasing user participation at the conferences, send them in to PTC/USER (info@ptcuser.org is the general address). Even a small increase in user participation could be very valuable for everyone, and PTC as well.

----------------------------------------------------------------------

Contents

- ProjectLink the Housewife

- Work Smarter with Pro/E

- Savannah Prologue

----------------------------------------------------------------------

- ProjectLink the Housewife

Anton Greeff from South Africa added some depth to a ProjectLink talk with an introduction based on his own experience with different projects over the years. He mentioned the project management philosophies of the last 30 years (PERT, Critical Path, TQM, etc, and most recently Tipu Ake, which is based on the traditional wisdom of the original Maori inhabitants of New Zealand).

But then he pointed out that projects still come in late and over budget routinely. He thinks the reason is that Project Management assumes a General giving orders and pulling strings. While he thinks the requirement for a successful project is Project Execution, which needs someone like a Housewife (or a Househusband, plenty of guys take on that role), asking how people are doing and checking that kids do wash behind their ears, less glamorous work but more vital.

Anton has recently taken up golf, and he finds a similarity between project management and golf: knowing exactly how to hit the ball isn't enough, and it's your follow-through that will really determine distance and success. So it's not just the original plan, it's the follow-through that also matters for a project.

Traditional project management has a huge amount of communication, often handled by the project manager manually. So the manager tends to spend his days on phone and email, and can't make any other contribution. People miscommunicate ("Hey, I wasn't in the meeting, I didn't know"), or misunderstand ("Thought that was due next week"), or exaggerate ("80% complete now"), or use understatement ("A few small changes"), or just lie outright ("Didn't you get my email yesterday?!"). Anton has seen it all.

Anton identified three types of project communications: project task dissemination, project status monitoring, and change control (the biggest task).

And so where does ProjectLink fit in all this? Well, ProjectLink isn't the General, laying out the project in advance. Rather, ProjectLink is the Housewife/Househusband, monitoring the project execution as it proceeds. Not the glamorous role, but a very vital role we can all agree, and very important to the success of any project.

So ProjectLink folders store project data in one place (specs, plans, quotes, as well as CAD related). Team members get their tasks by automatic emails and personal pages. ProjectLink reminds team members of deadlines, and then reports missed deadlines (someone didn't mark their assignment as complete by the set time). Since ProjectLink does depend on people reporting on their tasks, there is still room for exaggeration and outright lies. But miscommunication and misunderstanding can be substantially reduced.

Anton had a really constructive and positive approach to a missed deadline: go and speak to the guy and find the problem. This kind of constructive and personal approach is probably as important as any tool for a project to succeed.

Being workflow enabled, ProjectLink lets you drag and drop icons for different tasks to create a graphical workflow. Being Web based, Anton said it allows for access by any employee, supplier, or contractor. Although if your company has open access to Web sites within a firewall, letting people have HTTP access through the firewall may be a risk, since they could visit any Web site inside the firewall freely.

ProjectLink can also encourage communication, with discussion forums and distribution lists tied to particular subjects.

Implementing any project, Anton had some wise advice, like, "choose your non-technical people wisely". Often it seems that the non-technical people will not have much to say, so it doesn't really matter who they are. However, once on the team, even if they aren't major contributors, they can become major obstructions.

Probably the biggest single possible task Anton mentioned was to "formalize company standard operation procedures and processes". Could be a life's work there. But he had a simple example, of a telecom company that hired him to find out why they were taking 3 weeks to do simple design changes. And he found they used a form that had to pass through 16 hands, and get 11 signatures. So don't try to simply replicate a paper process.

ProjectLink has different templates for different project types. As an example of a template idea, Anton used a support call to a help desk. Now that's a good idea for a sample process that every PTC customer can understand personally: the help desk. Perhaps the PTC help desk in particular, we must almost all have experience with that. So whether our experience is Pro/E, or CADDS, or Pro/Desktop, or PDMLink, or another PTC package, we can all relate to help desk process examples.

----------------------------------------------------------------------

- Work Smarter with Pro/E

Ron Grabau gave this stimulating presentation. He had a flow of ideas how to work smarter (and not work harder). Seems HP has about 400 Pro/E users. There may be a few of them in Colorado, at a company HP acquired several years ago, but the others are at HP in Houston, formerly Compaq.

Ron works on parts like a bezel, with 1000 to 2000 features, which can take an hour to regen. He doesn't understand people who say they don't have even a few minutes to fix a part problem, but who later end up spending days watching regens. Ron and another guy once had two bezels, very similar, 1000 to 1200 features each. The same change had to be incorporated in each one, and it took Ron 4 hours, and it took the other guy 4 days.

PTC has a good similar story about a large assembly at one customer, which took 2 entire days to regen. PTC went on-site, and without changing anything about the content of the assembly, simply cleaning it up and restructuring it, they reduced the regen time to under an hour. So regen time doesn't have to be fixed, perhaps usually there's room for improvement.

Ron said the main key to working smarter is a set of company best practices, which are distributed to vendors too. It doesn't matter how many good ideas people have, if they aren't written down in a best practices document.

Many of Ron's suggestions echo the Motorola cell phone design process. The mechanical design of cell phones and workstations is different, but the Pro/E recommendations can be the same. Here are some of these shared Motorola/HP recommendations:

- keep reference control in mind at all times, know where your references for a new feature will lead

- never just take the default feature creation at the end of the model tree without questioning it. Insert all new features as early as possible in the model tree, keep moving the insert point up, to get references to early geometry

- leave drafts and rounds as late as possible (and do the big rounds before the smaller rounds)

- use Through Next where possible, so that extrusions will be more robust

- use Intent Surfaces for drafts, and Intent Chain for rounds, so that the underlying surfaces will be used, and not just the edges which can change in major ways

Then Ron had lots of other recommendations:

- define mapkeys to define and restore 3 or 4 temp. views (call them V1, V2, V3, V4). With one mapkey you define a particular temp. view. Move around anywhere, and with another mapkey you go back to that view (which includes orientation as well as zoom state). Definitely smarter than zoom in/zoom out/zoom in/zoom out etc. Four temp. views are probably enough to keep in mind. When you're ready to add another one, just define one of the existing views again (that's why they are temporary, they constantly get redefined).

- establish visible markers in the model tree just by creating an entity like a point, and giving it a name. The point is nothing (some people just let them pile up at the default coord. sys.), but the name in the model tree is everything. For example, put a marker, "Drafts & Rds", right at that point in the model tree where you start creating drafts and rounds.

- shelling is a big event in the life of a part. Before you shell anything, take a look at it, clean it up if possible, get confident this is what you really want. After the shelling, you may not be able to go back quite as easily, you'll be making more decisions.

- use layers regularly just to group features. One particular goal here is to be able to suppress and resume groups of features easily. Suppressing is good for speed, when you don't need everything, which is usually the case. And selecting features to suppress by layers is more efficient than hunting them down in the model tree too. And regular regens all along let you catch and identify a particular regen problem early.

- if you have a feature (like a boss) on a layer, don't forget to add any later dependent features (like a hole) to that layer too. Otherwise you won't be able to suppress by layer, because a dependent feature would be left dangling.

- leave patterns to the end. A hole pattern is usually not a key feature. You can mark the hole locations with points, that's fine (sometimes people leave points for holes in the finished model, if the mfg. engineer knows what to do with those points---like using crosses for holes in a drawing).

- always good to dimension when appropriate to the default datum planes, which are at the beginning of the part history and as stable as you can get. But don't be like the guy who took this advice literally, and dimensioned absolutely everything to the default datums.

- even if you have the same draft all over, don't do all that draft in one step. Because if you have problems with the draft later, you may not have a clue where to look for the source of the problem. While if you apply the draft to particular areas, one area at a time, whenever one of those drafts fails you'll know to look just in that area of the model.

- what if you create a boss, draft it, add rounds, then pattern it? Well, you've just broken every rule in Ron's book, because he is firm that drafts and rounds need to come as late as possible, only then.

That last example illustrates a good point, that building a quality model that regens as quickly as possible, and that is easy to understand and to fix, definitely does take more time. But the choice is pretty clear: either spend that extra time creating the model, or else spend a good deal more time watching regens and trying to understand and fix problems later. Even days more time.

----------------------------------------------------------------------

- Savannah Prologue

There's a whole world outside Orlando. Like the Everglades, where the animals and plants change with the ecological zones, every foot, from sea level up to 7 ft. (the highest point, dozens of miles inland). Or Key West, the islands out at the end of an ocean highway, many miles driving over the water and the other islands in the chain.

On this trip I decided to head north along the coast, into Georgia, up to the north end of the Georgia coast, to Savannah. About 250 miles from Orlando, enough to make you feel you have travelled.

And Savannah turned out to be entirely different from any town in Florida. It's an old sea port, almost 300 years old now, the first major port in the South. And it's still a working port today, with real ocean cargo ships passing up and down the river. Plus some spectacular yachts. When I was there a 4 masted schooner was visiting from Curacao, masts over 80 ft. tall.

Savannah itself was carefully sited on the top of a row of bluffs, overlooking the river and the port. Probably mostly just to catch the breezes, the location is good for that.

The city was laid out by James Oglethorpe, the first British governor. And he had a really ingenious and creative city plan. The city streets were laid out in a square grid pattern. But every two blocks, they dedicated one block of land as a city park. And those one block parks are placed in the middle of the intersections, so traffic takes a route around each park.

The result is that everywhere you walk in the downtown area, you're just one block away from a park, there's a park at every other corner. There are over twenty of these one block parks, spaced two blocks apart, and they can be very different. Some have many trees, some a few trees. Some more bushes, some more lawns. Some a fountain or two, some statues, some flowers. Some are more circular in layout, others more rectangular. Some have ferns, one has a gazebo, another has a children's play area and a basketball court. Each one has a sign with its name.

Most of the buildings around are still residential, older brick and stucco and some wood buildings, typically two or three story townhouses, probably mostly 19th century. There are stores and offices and businesses, enough to keep the downtown area alive, it's not just an artificially preserved tourist trap. Things like abandoned bathtubs out in back yards, planted with flowers, show an individual touch.

All this area could have been redeveloped years ago, with condos and hotels and whatnot. But in the 1950's, a group of seven local women organized to stop the destruction of one older landmark, a vegetable and fish market. They failed there, but out of their action there developed the Historic Savannah Foundation, which by now has bought up hundreds of older properties, and then sold them to interested people with covenants to restore and repair them. That explains why we now have a living older community in Savannah. And this Foundation helped develop an interest in historic preservation all around the US.

To walk through downtown Savannah, and visit every single square, takes perhaps a couple of hours, stopping and looking along the way, enjoying the variety of the architecture. It's a neat excursion, and you won't find anything like it in Florida, even in the amusement parks.

If you go to Savannah, it's worth stopping at Jekyll Island on the south Georgia coast. Over 100 years ago it was an island of exclusive vacation homes, and somebody has calculated that the families which vacationed back then on Jekyll Island controlled one-sixth of the world's wealth at that time. Since they included J.P. Morgan, perhaps that was true.

Today the place is not so high-society at all, there's even a Days Inn for the rest of us. But it has a long wide beach on the ocean, and dense forests inland, with a tropical feeling. If you walk on one of the overgrown paths through the forests, you feel closed in on all sides, vegetation everywhere, definitely an intruder in that world. Vines climb up to the tops of the trees, and vines descend from the tops down to the ground. The vines almost own the place, inside those forests.

And yet you're only steps away from the dunes and the wide open beach, lots of contrasts.
